Search results for: Irene A. Monte
54 Angiogenesis and Blood Flow: The Role of Blood Flow in Proliferation and Migration of Endothelial Cells
Authors: Hossein Bazmara, Kaamran Raahemifar, Mostafa Sefidgar, Madjid Soltani
Abstract:
Angiogenesis is the formation of new blood vessels from existing vessels. Because blood flows through the vessels during angiogenesis, blood flow plays an important role in regulating the angiogenesis process. Multiple mathematical models of angiogenesis have been proposed to simulate the formation of the complicated network of capillaries around a tumor. In this work, a multi-scale model of angiogenesis is developed to show the effect of blood flow on capillaries and network formation. This model spans multiple temporal and spatial scales, i.e., intracellular (molecular), cellular, and extracellular (tissue) scales. At the intracellular (molecular) scale, the signaling cascade of endothelial cells is obtained. Two main stages in the development of a vessel are considered. In the first stage, single sprouts are extended toward the tumor. In this stage, the main regulators of endothelial cell behavior are the signals from the extracellular matrix. After anastomosis and the formation of closed loops, blood flow starts in the capillaries. In this stage, flow-induced signals regulate endothelial cell behavior. At the cellular scale, growth and migration of endothelial cells are modeled with a discrete lattice Monte Carlo method called the cellular Potts model (CPM). At the extracellular (tissue) scale, diffusion of tumor angiogenic factors in the extracellular matrix, formation of closed loops (anastomosis), and shear stress induced by blood flow are considered. The model is able to simulate the formation of a closed loop and its extension. The results are validated against experimental data. The results show that, without blood flow, the capillaries are not able to maintain their integrity.
Keywords: angiogenesis, endothelial cells, multi-scale model, cellular Potts model, signaling cascade
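As an illustration of the cellular-scale machinery described above, the following minimal 2D cellular Potts sketch performs Metropolis-style copy attempts under an adhesion-plus-volume-constraint energy; the lattice size, coupling constants, temperature, and two-cell setup are invented for illustration and are not the paper's parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
L, J, LAMBDA, V_TARGET, T = 50, 2.0, 1.0, 25, 4.0  # lattice, adhesion, volume penalty, target volume, temperature

# Two-cell initial configuration: 0 = medium, 1 and 2 = cell IDs
sigma = np.zeros((L, L), dtype=int)
sigma[10:15, 10:15] = 1
sigma[30:35, 30:35] = 2

def energy(s):
    # Adhesion: cost J for each neighbouring lattice-site pair with different IDs
    h = J * np.sum(s[:, 1:] != s[:, :-1]) + J * np.sum(s[1:, :] != s[:-1, :])
    # Volume constraint keeps each cell near its target volume
    for cid in (1, 2):
        h += LAMBDA * (np.sum(s == cid) - V_TARGET) ** 2
    return h

dirs = [(0, 1), (0, -1), (1, 0), (-1, 0)]

def mcs_step(s):
    # One Metropolis copy attempt: a random site tries to copy a neighbour's ID
    # (recomputing the full energy is inefficient but keeps the sketch clear)
    x, y = rng.integers(1, L - 1, size=2)
    dx, dy = dirs[rng.integers(4)]
    if s[x, y] == s[x + dx, y + dy]:
        return
    e0, old = energy(s), s[x, y]
    s[x, y] = s[x + dx, y + dy]
    if rng.random() >= np.exp(min(0.0, -(energy(s) - e0) / T)):
        s[x, y] = old  # reject the copy

for _ in range(20000):
    mcs_step(sigma)
print("cell volumes:", np.sum(sigma == 1), np.sum(sigma == 2))
```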
Procedia PDF Downloads 425
53 A Copula-Based Approach for the Assessment of Severity of Illness and Probability of Mortality: An Exploratory Study Applied to Intensive Care Patients
Authors: Ainura Tursunalieva, Irene Hudson
Abstract:
Continuous improvement of both the quality and safety of health care is an important goal in Australia and internationally. The intensive care unit (ICU) receives patients with a wide variety of illnesses of varying severity. Accurately identifying patients at risk of developing complications or dying is crucial to increasing healthcare efficiency. Thus, it is essential for clinicians and researchers to have a robust framework capable of evaluating the risk profile of a patient. ICU scoring systems provide such a framework. The Acute Physiology and Chronic Health Evaluation III (APACHE III) and the Simplified Acute Physiology Score II (SAPS II) are ICU scoring systems frequently used for assessing the severity of acute illness. These scoring systems collect multiple risk factors for each patient, including physiological measurements, and then render the assessment outcomes of individual risk factors into a single numerical value. A higher score is related to a more severe patient condition. Furthermore, the Mortality Probability Model II (MPM II) uses logistic regression based on independent risk factors to predict a patient's probability of mortality. An important overlooked limitation of SAPS II and MPM II is that they do not, to date, include interaction terms between a patient's vital signs. This is a prominent oversight, as it is likely there is an interplay among vital signs. The co-existence of certain conditions may pose a greater health risk than when these conditions exist independently. One barrier to including such interaction terms in predictive models is the dimensionality issue, as variable selection becomes difficult. We propose an innovative scoring system which takes into account the dependence structure among a patient's vital signs, such as systolic and diastolic blood pressures, heart rate, pulse interval, and peripheral oxygen saturation. Copulas will capture the dependence among normally distributed and skewed variables, as some of the vital sign distributions are skewed. The estimated dependence parameter will then be incorporated into the traditional scoring systems to adjust the points allocated for the individual vital sign measurements. The same dependence parameter will also be used to create an alternative copula-based model for predicting a patient's probability of mortality. The new copula-based approach will accommodate not only a patient's trajectories of vital signs but also the joint dependence probabilities among the vital signs. We hypothesise that this approach will produce more stable assessments and lead to more time-efficient and accurate predictions. We will use two data sets: (1) 250 ICU patients admitted once to the Chui Regional Hospital (Kyrgyzstan) and (2) 37 ICU patients' agitation-sedation profiles collected by the Hunter Medical Research Institute (Australia). Both the traditional scoring approach and our copula-based approach will be evaluated using the Brier score to indicate overall model performance, the concordance (or c) statistic to indicate the discriminative ability (or area under the receiver operating characteristic (ROC) curve), and goodness-of-fit statistics for calibration. We will also report discrimination and calibration values and establish visualization of the copulas and high-dimensional regions of risk interrelating two or three vital signs in so-called higher-dimensional ROCs.
Keywords: copula, intensive care unit scoring system, ROC curves, vital sign dependence
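As a sketch of the core idea, the dependence parameter of a Gaussian copula between two vital signs can be estimated by rank-transforming each margin to uniform and correlating the normal scores; the data below are simulated placeholders, not the Chui Regional Hospital records.

```python
import numpy as np
from scipy.stats import norm, spearmanr

rng = np.random.default_rng(1)

# Hypothetical vital-sign data: systolic BP (skewed) and heart rate
sbp = rng.lognormal(mean=4.8, sigma=0.15, size=250)
hr = 70 + 0.1 * sbp + rng.normal(0, 8, size=250)

def to_uniform(x):
    # Probability-integral transform via ranks (empirical marginals)
    return (np.argsort(np.argsort(x)) + 0.5) / len(x)

# Gaussian copula: map uniform margins to standard normal scores, then correlate
z1, z2 = norm.ppf(to_uniform(sbp)), norm.ppf(to_uniform(hr))
rho = np.corrcoef(z1, z2)[0, 1]
print(f"copula dependence parameter rho = {rho:.3f}")
print(f"Spearman rank correlation       = {spearmanr(sbp, hr).correlation:.3f}")
```

Because the rank transform discards the marginal shapes, this estimate is insensitive to the skewness of the individual vital-sign distributions, which is exactly why copulas suit this setting.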
Procedia PDF Downloads 152
52 Study of Secondary Particle Production in Carbon Ion Beam Radiotherapy
Authors: Shaikah Alsubayae, Gianluigi Casse, Carlos Chavez, Jon Taylor, Alan Taylor, Mohammad Alsulimane
Abstract:
Ensuring accurate radiotherapy with carbon therapy requires precise monitoring of the radiation dose distribution within the patient's body. This monitoring is essential for targeted tumor treatment, minimizing harm to healthy tissues, and improving treatment effectiveness while lowering side effects. In our investigation, we employed a methodological approach to monitor secondary proton doses in carbon therapy using Monte Carlo simulations. Initially, Geant4 simulations were utilized to extract the initial positions of secondary particles formed during interactions between carbon ions and water. These particles included protons, gamma rays, alpha particles, neutrons, and tritons. Subsequently, we studied the relationship between the carbon ion beam and these secondary particles. Interaction Vertex Imaging (IVI) is valuable for monitoring dose distribution in carbon therapy. It provides details about the positions and amounts of secondary particles, particularly protons. The IVI method depends on charged particles produced during ion fragmentation to gather information about the range by reconstructing particle trajectories back to their point of origin, referred to as the vertex. In our simulations regarding carbon ion therapy, we observed a strong correlation between some secondary particles and the range of carbon ions. However, challenges arose due to the target's unique elongated geometry, which hindered the straightforward transmission of forward-generated protons. Consequently, the limited protons that emerged mostly originated from points close to the target entrance. The trajectories of fragments (protons) were approximated as straight lines, and a beam back-projection algorithm, using the interaction positions recorded in the Si detectors, was developed to reconstruct vertices. The analysis revealed a correlation between the reconstructed and actual positions.
Keywords: radiotherapy, carbon therapy, monitoring of radiation dose, interaction vertex imaging
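Under the straight-track approximation described above, vertex reconstruction reduces to back-projecting a line through two recorded detector hits and finding its closest approach to the beam axis. A minimal sketch follows; the geometry and hit coordinates are invented for illustration.

```python
import numpy as np

def reconstruct_vertex_z(hit1, hit2):
    """Back-project a straight fragment track through two Si-detector hits
    (x, y, z) and return the z of its closest approach to the beam axis
    (assumed here to run along x = y = 0)."""
    p = np.asarray(hit1, float)
    d = np.asarray(hit2, float) - p                  # track direction
    # Minimise the transverse distance x(t)^2 + y(t)^2 over the line parameter t
    t = -(p[0] * d[0] + p[1] * d[1]) / (d[0] ** 2 + d[1] ** 2)
    return p[2] + t * d[2]

# Example: a proton emitted near z ~ 20 mm, recorded in two planes downstream
print(reconstruct_vertex_z((2.7, 1.35, 150.0), (4.7, 2.35, 250.0)))  # -> 15.0
```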
Procedia PDF Downloads 84
51 Objective Assessment of the Evolution of Microplastic Contamination in Sediments from a Vast Coastal Area
Authors: Vanessa Morgado, Ricardo Bettencourt da Silva, Carla Palma
Abstract:
The environmental pollution by microplastics is well recognized. Microplastics have already been detected in various matrices from distinct environmental compartments worldwide, including some from remote areas. Various methodologies and techniques have been used to determine microplastics in such matrices, for instance, sediment samples from the ocean bottom. In order to determine microplastics in a sediment matrix, the sample is typically sieved through a 5 mm mesh, digested to remove the organic matter, and density separated to isolate microplastics from the denser part of the sediment. The physical analysis of microplastics consists of visual analysis under a stereomicroscope to determine particle size, colour, and shape. The chemical analysis is performed by an infrared spectrometer coupled to a microscope (micro-FTIR), allowing identification of the chemical composition of a microplastic, i.e., the type of polymer. Creating legislation and policies to control and manage (micro)plastic pollution is essential to protect the environment, namely the coastal areas. Regulation is defined from the known relevance and trends of the pollution type. This work discusses the assessment of contamination trends of a 700 km² oceanic area affected by contamination heterogeneity, sampling representativeness, and the uncertainty of the analysis of collected samples. The methodology developed consists of objectively identifying meaningful variations of microplastic contamination by Monte Carlo simulation of all uncertainty sources. This work allowed us to unequivocally conclude that the contamination level of the studied area did not vary significantly between two consecutive years (2018 and 2019) and that PET microplastics are the major type of polymer. The comparison of contamination levels was performed for a 99% confidence level. The developed know-how is crucial for the objective and binding determination of microplastic contamination in relevant environmental compartments.
Keywords: measurement uncertainty, micro-ATR-FTIR, microplastics, ocean contamination, sampling uncertainty
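The year-to-year comparison can be sketched as a Monte Carlo propagation of the combined sampling and analytical uncertainties, followed by a 99% interval on the difference; all numbers below are illustrative placeholders, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 100_000  # Monte Carlo draws

# Hypothetical mean microplastic counts (items/kg dry sediment) with combined
# sampling + analytical standard uncertainties for each campaign
mean_2018, u_2018 = 85.0, 18.0
mean_2019, u_2019 = 92.0, 20.0

diff = rng.normal(mean_2019, u_2019, N) - rng.normal(mean_2018, u_2018, N)
lo, hi = np.percentile(diff, [0.5, 99.5])  # 99% confidence interval
print(f"99% CI for the 2019-2018 difference: [{lo:.1f}, {hi:.1f}]")
print("significant change" if lo > 0 or hi < 0 else "no significant change at 99% level")
```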
Procedia PDF Downloads 89
50 The Control of Wall Thickness Tolerance during Pipe Purchase Stage Based on Reliability Approach
Authors: Weichao Yu, Kai Wen, Weihe Huang, Yang Yang, Jing Gong
Abstract:
Metal-loss corrosion is a major threat to the safety and integrity of gas pipelines, as it may result in burst failures which can cause severe consequences, including enormous economic losses as well as personnel casualties. Therefore, it is important to ensure the integrity and efficiency of corroding pipelines, considering the value of the wall thickness, which plays an important role in the failure probability of a corroding pipeline. The wall thickness is controlled during the pipe purchase stage. For example, the API SPEC 5L standard regulates the allowable tolerance of the wall thickness from the specified value during pipe purchase. The allowable wall thickness tolerance will be used to determine the wall thickness distribution characteristics, such as the mean value, standard deviation, and distribution type. Taking the uncertainties of the input variables in the burst limit-state function into account, the reliability approach, rather than the deterministic approach, will be used to evaluate the failure probability. Moreover, the cost of pipe purchase will be influenced by the allowable wall thickness tolerance. Stricter control of the wall thickness usually corresponds to a higher pipe purchase cost. Therefore, changing the wall thickness tolerance will vary both the probability of a burst failure and the cost of the pipe. This paper describes an approach to optimize the wall thickness tolerance considering both the safety and economy of corroding pipelines. In this paper, the corrosion burst limit-state function in Annex O of CSA Z662-07 is employed to evaluate the failure probability using the Monte Carlo simulation technique. By changing the allowable wall thickness tolerance, the parameters of the wall thickness distribution in the limit-state function are changed. Using the reliability approach, the corresponding variations in the burst failure probability are shown. On the other hand, changing the wall thickness tolerance leads to a change in the pipe purchase cost. Using the variation of the failure probability and pipe cost caused by changing the wall thickness tolerance specification, the optimal allowable tolerance can be obtained and used to define pipe purchase specifications.
Keywords: allowable tolerance, corroding pipeline segment, operation cost, production cost, reliability approach
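A minimal Monte Carlo sketch of the reliability approach follows; it uses a simplified Barlow-type burst capacity with a depth-reduction term rather than the actual Annex O limit-state function, and all distribution parameters are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 1_000_000

# Hypothetical input distributions (illustrative, not the Annex O calibration);
# the wall-thickness standard deviation is the quantity tied to the purchase tolerance
wt = rng.normal(9.5, 0.25, N)                       # wall thickness, mm
d = np.clip(rng.normal(3.0, 0.8, N), 0.0, None)     # corrosion defect depth, mm
sy = rng.normal(450.0, 20.0, N)                     # yield strength, MPa
D = 610.0                                           # outside diameter, mm
p_op = 8.0                                          # operating pressure, MPa

# Simplified burst capacity of a corroded pipe (Barlow-type with depth reduction)
p_burst = 2.0 * sy * wt * (1.0 - d / wt) / D
pf = np.mean(p_burst <= p_op)
print(f"estimated burst failure probability: {pf:.2e}")
```

Re-running this with a tighter wall-thickness standard deviation shows directly how the purchase tolerance trades failure probability against pipe cost.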
Procedia PDF Downloads 396
49 Physico-Chemical Characterization of Vegetable Oils from Oleaginous Seeds (Croton megalocarpus, Ricinus communis L., and Gossypium hirsutum L.)
Authors: Patrizia Firmani, Sara Perucchini, Irene Rapone, Raffella Borrelli, Stefano Chiaberge, Manuela Grande, Rosamaria Marrazzo, Alberto Savoini, Andrea Siviero, Silvia Spera, Fabio Vago, Davide Deriu, Sergio Fanutti, Alessandro Oldani
Abstract:
According to the Renewable Energy Directive II, the use of palm oil in diesel will be gradually reduced from 2023 and should reach zero in 2030 due to the deforestation caused by its production. Eni aims to find alternative feedstocks for its biorefineries to eliminate the use of palm oil by 2023. The ideal vegetable oils to be used in bio-refineries are therefore those obtainable from plants that grow on marginal lands and with low impact on the food-and-feed chain; hence, Eni research is studying the possibility of using oleaginous seeds, such as castor, croton, and cotton, to extract the oils to be exploited as feedstock in bio-refineries. To verify their suitability for the upgrading processes, an analytical protocol for their characterization has been drawn up and applied. The analytical characterizations include determination of water and ash content, elemental analysis (CHNS analysis, X-Ray Fluorescence, Inductively Coupled Plasma - Optical Emission Spectroscopy, ICP-Mass Spectrometry), and total acid number determination. Gas chromatography coupled to a flame ionization detector (GC-FID) is used to quantify the lipid content in terms of free fatty acids, mono-, di- and triacylglycerols, and fatty acid composition. Finally, Nuclear Magnetic Resonance and Fourier Transform-Infrared spectroscopies are exploited, together with GC-MS and Fourier Transform-Ion Cyclotron Resonance, to study the composition of the oils. This work focuses on the GC-FID analysis of the lipid fraction of these oils, as the main constituent and the one of greatest interest for bio-refinery processes. Specifically, the lipid component of the extracted oil was quantified after sample silanization and transmethylation: silanization allows the elution of high-boiling compounds and is useful for determining the quantity of free acids and glycerides in oils, while transmethylation leads to a mixture of fatty acid esters and glycerol, thus allowing evaluation of the composition of glycerides in terms of Fatty Acid Methyl Esters (FAME). Cotton oil was extracted from cotton oilcake, croton oil was obtained by seed pressing and by ASE extraction of seeds and oilcake, while castor oil comes from seed pressing (not performed in Eni laboratories). GC-FID analyses showed that the cotton oil is 90% triglycerides and about 6% diglycerides, while free fatty acids are about 2%. In terms of FAME, C18 acids make up 70% of the total, and linoleic acid is the major constituent. Palmitic acid is present at 17.5%, while the other acids are in low concentration (<1%). Both analyses show the presence of non-gas-chromatographable compounds. Croton oils from seed pressing and extraction mainly contain triglycerides (98%). Concerning FAME, the main component is linoleic acid (approx. 80%). Oilcake croton oil shows a higher abundance of diglycerides (6% vs. ca. 2%) and a lower content of triglycerides (38% vs. 98%) compared to the previous oils. Finally, castor oil is mostly constituted of triacylglycerols (about 69%), followed by diglycerides (about 10%). About 85.2% of total FAME is ricinoleic acid, as a constituent of triricinolein, the most abundant triglyceride of castor oil. Based on the analytical results, these oils represent feedstocks of interest for possible exploitation as advanced biofuels.
Keywords: analytical protocol, biofuels, biorefinery, gas chromatography, vegetable oil
Procedia PDF Downloads 144
48 A Study of Secondary Particle Production from Carbon Ion Beam for Radiotherapy
Authors: Shaikah Alsubayae, Gianluigi Casse, Carlos Chavez, Jon Taylor, Alan Taylor, Mohammad Alsulimane
Abstract:
Achieving precise radiotherapy through carbon therapy necessitates the accurate monitoring of radiation dose distribution within the patient's body. This process is pivotal for targeted tumor treatment, minimizing harm to healthy tissues, and enhancing overall treatment effectiveness while reducing the risk of side effects. In our investigation, we adopted a methodological approach to monitor secondary proton doses in carbon therapy using Monte Carlo (MC) simulations. Initially, Geant4 simulations were employed to extract the initial positions of secondary particles generated during interactions between carbon ions and water, including protons, gamma rays, alpha particles, neutrons, and tritons. Subsequently, we explored the relationship between the carbon ion beam and these secondary particles. Interaction vertex imaging (IVI) proves valuable for monitoring dose distribution during carbon therapy, providing information about secondary particle locations and abundances, particularly protons. The IVI method relies on charged particles produced during ion fragmentation to gather range information by reconstructing particle trajectories back to their point of origin, known as the vertex. In the context of carbon ion therapy, our simulation results indicated a strong correlation between some secondary particles and the range of carbon ions. However, challenges arose due to the unique elongated geometry of the target, hindering the straightforward transmission of forward-generated protons. Consequently, the limited protons that did emerge predominantly originated from points close to the target entrance. Fragment (proton) trajectories were approximated as straight lines, and a beam back-projection algorithm, utilizing interaction positions recorded in Si detectors, was developed to reconstruct vertices. The analysis revealed a correlation between the reconstructed and actual positions.
Keywords: radiotherapy, carbon therapy, monitoring of secondary proton doses, interaction vertex imaging
Procedia PDF Downloads 78
47 Organ Dose Calculator for Fetus Undergoing Computed Tomography
Authors: Choonsik Lee, Les Folio
Abstract:
Pregnant patients may undergo CT in emergencies unrelated to pregnancy, and the potential risk to the developing fetus is of concern. It is critical to accurately estimate fetal organ doses in CT scans. We developed a fetal organ dose calculation tool using pregnancy-specific computational phantoms combined with Monte Carlo radiation transport techniques. We adopted a series of pregnancy computational phantoms developed at the University of Florida at the gestational ages of 8, 10, 15, 20, 25, 30, 35, and 38 weeks (Maynard et al. 2011). More than 30 organs and tissues and 20 skeletal sites are defined in each fetus model. We calculated fetal organ doses normalized by CTDIvol to derive organ dose conversion coefficients (mGy/mGy) for the eight fetuses for consecutive slice locations ranging from the top to the bottom of the pregnancy phantoms with 1 cm slice thickness. Organ dose from helical scans was approximated by the summation of doses from the multiple axial slices included in the given scan range of interest. We then compared dose conversion coefficients for major fetal organs in the abdominal-pelvic (AP) CT scan of the pregnancy phantoms with the uterine dose of a non-pregnant adult female computational phantom. A comprehensive library of organ conversion coefficients was established for the eight developing fetuses undergoing CT. They were implemented into an in-house graphical user interface-based computer program for convenient estimation of fetal organ doses by inputting CT technical parameters as well as the age of the fetus. We found that the esophagus received the least dose, whereas the kidneys received the greatest dose in all fetuses in AP scans of the pregnancy phantoms. We also found that when the uterine dose of a non-pregnant adult female phantom is used as a surrogate for fetal organ doses, the root-mean-square error ranged from 0.08 mGy (8 weeks) to 0.38 mGy (38 weeks). The uterine dose was up to 1.7-fold greater than the esophagus dose of the 38-week fetus model. The calculation tool should be useful in cases requiring fetal organ dose in emergency CT scans as well as for patient dose monitoring.
Keywords: computed tomography, fetal dose, pregnant women, radiation dose
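The dose-engine logic described above (helical scan approximated as a sum of 1 cm axial slices, each scaled by CTDIvol) can be sketched as follows; the conversion-coefficient values and the organ/slice indexing are invented placeholders, not the tool's library.

```python
import numpy as np

# Hypothetical 1-cm-slice dose conversion coefficients (mGy/mGy), indexed by
# slice position, for one fetal organ of one gestational-age phantom; the real
# tool stores a library of these for >30 organs x 8 fetuses.
kidney_cc = np.array([0.01, 0.04, 0.12, 0.20, 0.25, 0.20, 0.10, 0.03])

def fetal_organ_dose(cc, scan_start, scan_stop, ctdi_vol):
    """Approximate a helical scan as the sum of 1-cm axial slices: organ dose
    = CTDIvol x sum of per-slice conversion coefficients in the scan range."""
    return ctdi_vol * cc[scan_start:scan_stop].sum()

# Abdominal-pelvic scan covering slices 2..6 at CTDIvol = 10 mGy
print(f"kidney dose: {fetal_organ_dose(kidney_cc, 2, 7, 10.0):.1f} mGy")
```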
Procedia PDF Downloads 140
46 Study of the Uncertainty Behaviour for the Specific Total Enthalpy of the Hypersonic Plasma Wind Tunnel Scirocco at Italian Aerospace Research Center
Authors: Adolfo Martucci, Iulian Mihai
Abstract:
By means of the expansion through a Conical Nozzle and the low pressure inside the Test Chamber, a large, stable hypersonic flow takes place for a duration of up to 30 minutes. Downstream of the Test Chamber, the diffuser has the function of reducing the flow velocity to subsonic values, and as a consequence, the temperature increases again. In order to cool down the flow, a heat exchanger is present at the end of the diffuser. The Vacuum System generates the necessary vacuum conditions for the correct hypersonic flow generation, and the DeNOx system, which follows the Vacuum System, reduces the nitrogen oxide concentrations created inside the plasma flow below the limits imposed by Italian law. This very large, powerful, and complex facility allows researchers and engineers to reproduce entire re-entry trajectories of space vehicles into the atmosphere. One of the most important parameters for a hypersonic flowfield representative of re-entry conditions is the specific total enthalpy. This is the whole energy content of the fluid, and it represents how severe the conditions could be around a spacecraft re-entering from a space mission or, in our case, inside a hypersonic wind tunnel. It is possible to reach very high values of enthalpy (up to 45 MJ/kg) that, together with the large allowable size of the models, represent huge possibilities for on-ground experiments in the atmospheric re-entry field. The maximum nozzle exit section diameter is 1950 mm, where Mach numbers very much higher than 1 can be reached. The specific total enthalpy is evaluated by means of a number of measurements, each of them contributing its value and its uncertainty. The scope of the present paper is the evaluation of the sensitivity of the uncertainty of the specific total enthalpy to all the parameters and measurements involved. The sensors that, if improved, could give the highest advantages have thus been identified. Several simulations in Python with the METAS library, together with Monte Carlo simulations, are presented along with the obtained results and a discussion of them.
Keywords: hypersonic, uncertainty, enthalpy, simulations
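The paper evaluates enthalpy from a number of facility measurements; as a stand-in, the sketch below propagates measurement uncertainty by Monte Carlo through a simple energy-balance estimate, h0 = eta * P_arc / mdot, with all input values and uncertainties invented for illustration (the METAS uncertainty library used in the paper is not reproduced here).

```python
import numpy as np

rng = np.random.default_rng(4)
N = 200_000

# Illustrative measured inputs with standard uncertainties
P_arc = rng.normal(40e6, 0.8e6, N)   # arc power, W
eta = rng.normal(0.45, 0.02, N)      # heater thermal efficiency
mdot = rng.normal(1.0, 0.03, N)      # air mass flow, kg/s

h0 = eta * P_arc / mdot / 1e6        # specific total enthalpy, MJ/kg
print(f"h0 = {h0.mean():.1f} MJ/kg, u(h0) = {h0.std():.2f} MJ/kg")

# Sensitivity: the correlation of each input with the output identifies the
# sensor whose improvement pays off most
for name, x in [("P_arc", P_arc), ("eta", eta), ("mdot", mdot)]:
    print(name, f"{np.corrcoef(x, h0)[0, 1]:+.2f}")
```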
Procedia PDF Downloads 97
45 A Case Report on the Course and Outcome of a Patient Diagnosed with Trichotillomania and Major Depressive Disorder
Authors: Ziara Carmelli G. Tan, Irene Carmelle S. Tan
Abstract:
Background: Trichotillomania (TTM) and Major Depressive Disorder (MDD) are two psychiatric conditions that frequently co-occur, presenting a significant challenge for treatment due to their complex interplay. TTM involves repetitive hair-pulling, leading to noticeable hair loss and distress, while MDD is characterized by persistent low mood and loss of interest or pleasure, leading to dysfunctionality. This case report examines the intricate relationship between TTM and MDD in a young adult female, emphasizing the need for a comprehensive, multifaceted therapeutic approach to address both disorders effectively. Case Presentation: The patient is a 21-year-old female college student and youth church leader who presented with chronic hair-pulling and depressive symptoms. Her premorbid personality was marked by low self-esteem and a strong need for external validation. Despite her academic and social responsibilities and achievements, she struggled to manage her emotional distress, which was exacerbated by her family dynamics and her role within her church community. Her hair-pulling and mood symptoms were particularly triggered by self-esteem threats and feelings of inadequacy. She was diagnosed with Trichotillomania (Scalp) and Major Depressive Disorder. Intervention/Management: The patient's treatment plan was comprehensive, incorporating both pharmacological and non-pharmacological interventions. Initial pharmacologic management was Fluoxetine 20 mg/day, up-titrated to 40 mg/day with no improvement; she was hence shifted to Escitalopram 20 mg/day and started on N-acetylcysteine 600 mg/day, with noted significant improvement in symptoms. Psychotherapeutic strategies played a crucial role in her treatment. These included supportive-expressive psychodynamic psychotherapy, which helped her explore and understand underlying emotional conflicts. Cognitive-behavioral techniques were employed to modify her maladaptive thoughts and behaviors. Grief processing was integrated to help her cope with significant losses. Family therapy was done to address conflicts and engage the family in the treatment process. Psychoeducation was provided to enhance her understanding of her condition and to empower her in her treatment journey. A suicide safety plan was developed to ensure her safety during critical periods. An interprofessional approach, which involved coordination with the Dermatology service for co-management, was also a key component of her treatment. Outcome: Over the course of 15 therapy sessions, the patient demonstrated significant improvement in both her depressive symptoms and hair-pulling behavior. Her active engagement in therapy, combined with pharmacological support, facilitated better emotional regulation and a more cohesive sense of self. Her adherence to the treatment plan, along with the collaborative efforts of the interprofessional team, contributed to her positive outcomes. Discussion: This case underscores the significance of addressing both TTM and its comorbid conditions to achieve effective treatment outcomes. The intricate interplay between TTM and MDD in this patient's case highlights the importance of a comprehensive treatment plan that includes both pharmacological and psychotherapeutic approaches. Supportive-expressive psychodynamic psychotherapy, cognitive-behavioral techniques, and family therapy were particularly beneficial in addressing the complex emotional and behavioral aspects of her condition. The involvement of an interprofessional team, including dermatology co-management, was crucial in providing holistic care. Future practice should consider the benefits of such a multidisciplinary approach to managing complex cases like this, ensuring that both the psychological and physiological aspects of the disorders are adequately addressed.
Keywords: cognitive-behavioral therapy, interprofessional approach, major depressive disorder, psychodynamic psychotherapy, trichotillomania
Procedia PDF Downloads 30
44 Speckle-Based Phase Contrast Micro-Computed Tomography with Neural Network Reconstruction
Authors: Y. Zheng, M. Busi, A. F. Pedersen, M. A. Beltran, C. Gundlach
Abstract:
X-ray phase contrast imaging has been shown to yield better contrast than conventional attenuation X-ray imaging, especially for soft tissues in the medical imaging energy range. This can potentially lead to better diagnosis for patients. However, phase contrast imaging has mainly been performed using highly brilliant synchrotron radiation, as it requires highly coherent X-rays. Many research teams have demonstrated that it is also feasible using a laboratory source, bringing it one step closer to clinical use. Nevertheless, the requirement of fine gratings and high-precision stepping motors when using a laboratory source prevents it from being widely used. Recently, a random phase object has been proposed as an analyzer. This method requires a much less robust experimental setup. However, previous studies were done using a particular X-ray source (liquid-metal-jet micro-focus source) or high-precision motors for stepping. We have been working on a much simpler setup with just a small modification of a commercial bench-top micro-CT (computed tomography) scanner, by introducing a piece of sandpaper as the phase analyzer in front of the X-ray source. However, this approach needs a suitable algorithm for speckle tracking and 3D reconstruction. The precision and sensitivity of the speckle tracking algorithm determine the resolution of the system, while the 3D reconstruction algorithm affects the minimum number of projections required, thus limiting the temporal resolution. As phase contrast imaging methods usually require much longer exposure times than traditional absorption-based X-ray imaging technologies, a dynamic phase contrast micro-CT with a high temporal resolution is particularly challenging. Different reconstruction methods, including neural network based techniques, will be evaluated in this project to increase the temporal resolution of the phase contrast micro-CT. A Monte Carlo ray tracing simulation (McXtrace) was used to generate a large dataset to train the neural network, in order to address the issue that neural networks require large amounts of training data to obtain high-quality reconstructions.
Keywords: micro-CT, neural networks, reconstruction, speckle-based X-ray phase contrast
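The speckle-tracking step amounts to finding, for each local window, the shift that best matches the reference speckle pattern to the sample image; the displacement field then feeds the phase retrieval. A minimal integer-shift sketch using normalized cross-correlation follows (window and search sizes, and the synthetic rigid-shift test, are illustrative assumptions).

```python
import numpy as np

def track_speckle(ref, sample, win=16, search=5):
    """For each window of the reference image, find the integer shift (within
    +/- search px) of the sample image that maximizes the normalized
    cross-correlation. Returns one (dx, dy) per window."""
    ny, nx = ref.shape
    shifts = []
    for y in range(search, ny - win - search, win):
        for x in range(search, nx - win - search, win):
            r = ref[y:y + win, x:x + win]
            r = (r - r.mean()) / (r.std() + 1e-12)
            best, best_dxy = -np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    s = sample[y + dy:y + dy + win, x + dx:x + dx + win]
                    s = (s - s.mean()) / (s.std() + 1e-12)
                    c = np.mean(r * s)
                    if c > best:
                        best, best_dxy = c, (dx, dy)
            shifts.append(best_dxy)
    return np.array(shifts)

rng = np.random.default_rng(5)
ref = rng.random((64, 64))
sample = np.roll(ref, (2, 1), axis=(0, 1))     # rigid 2-px/1-px test shift
print(track_speckle(ref, sample).mean(axis=0))  # ~ [1, 2] as (dx, dy)
```

In practice, sub-pixel refinement of the correlation peak is what sets the phase sensitivity, which is why the abstract stresses the precision of this step.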
Procedia PDF Downloads 257
43 Corporate Social Responsibility and Financial Performance Complementarity in Multinational Enterprises of the EU and India: A Socio-Political Approach
Authors: Moses Pinto, Ana Paula Monte
Abstract:
The present research analyses the interactions between various categories of corporate social responsibility (CSR) that mediate the relationship between CSR and financial performance in Multinational Enterprises (MNE), in light of the socio-political factors prevalent in the countries under observation. It is hypothesized that the absence of consensus in the empirical literature on the CSR-financial performance relationship may be explained by the existence of synergies (complementarities) between the different CSR components. To investigate whether such relationships exist, a final unbalanced panel sample of 1,000 observations taken from 100 Multinational Enterprises per year operating in the Schengen countries and one South-East Asian country, namely India, over a span of 10 years (2008 to 2018) has been analyzed. The empirical analysis employs dynamic panel data methods, specifically the system Generalized Method of Moments (GMM), to detect the varying degrees of relationships between the CSR and financial performance parameters against the background of the socio-political factors prevailing in the countries at the time, also taking into account the bilateral treaty obligations between the countries under observation. The econometric model employs the financial ratio Return on Assets (ROA) as the indicator of financial performance, in order to gauge the internal performance and valuation of a firm, as opposed to Tobin's Q, which provides an external evaluation of a firm's financial performance that may not always be accurate. The various CSR dimensions demonstrated significant correlations with ROA, including some negative correlations and one positive correlation that is highly significant throughout the analysis, namely the correlation between ROA and the CSR dimension 'Environment'. The results provide a deeper insight into the synergistic CSR activities that managers could adopt in their firm's CSR strategy in order to enhance ROA, and into which interactions between the CSR dimensions can be adopted together due to their positive correlation with each other and with ROA. Future lines of research will investigate the effects of socio-political factors on the ROA of MNEs through better-designed econometric models.
Keywords: CSR, financial performance, complementarity, sociopolitical factors
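The paper's system GMM estimation is more involved than a short sketch allows, but the core identification trick of dynamic panel estimators can be illustrated with the simpler Anderson-Hsiao instrumental-variables estimator on simulated data; the variable names and the data-generating process below are invented for illustration, not the paper's specification.

```python
import numpy as np

rng = np.random.default_rng(6)
N, T, rho, beta = 100, 10, 0.5, 1.2

# Simulate a dynamic panel: roa[i,t] = rho*roa[i,t-1] + beta*csr[i,t] + fe[i] + eps
fe = rng.normal(0, 1, N)
csr = rng.normal(0, 1, (N, T))
roa = np.zeros((N, T))
for t in range(1, T):
    roa[:, t] = rho * roa[:, t - 1] + beta * csr[:, t] + fe + rng.normal(0, 1, N)

# Anderson-Hsiao: first-difference out the fixed effect, then instrument the
# endogenous d.roa[t-1] with the level roa[t-2]
dy = (roa[:, 3:] - roa[:, 2:-1]).ravel()      # d.roa[t]
dy1 = (roa[:, 2:-1] - roa[:, 1:-2]).ravel()   # d.roa[t-1] (endogenous)
dx = (csr[:, 3:] - csr[:, 2:-1]).ravel()      # d.csr[t]
z = roa[:, 1:-2].ravel()                      # instrument: roa[t-2]

X = np.column_stack([dy1, dx])
Z = np.column_stack([z, dx])                  # d.csr instruments itself
b = np.linalg.solve(Z.T @ X, Z.T @ dy)        # just-identified IV estimator
print(f"rho_hat = {b[0]:.2f} (true 0.5), beta_hat = {b[1]:.2f} (true 1.2)")
```

System GMM extends this by adding lagged-difference instruments for the levels equation, which improves efficiency when the series are persistent.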
Procedia PDF Downloads 125
42 Computational Study of Composite Films
Authors: Rudolf Hrach, Stanislav Novak, Vera Hrachova
Abstract:
Composite and nanocomposite films represent a class of promising materials and are often studied for their mechanical, electrical, and other properties. The most interesting ones are probably the composite metal/dielectric structures consisting of a metal component embedded in an oxide or polymer matrix. The behaviour of composite films varies with the amount of the metal component inside, known as the filling factor. For small filling factors, the structures contain individual metal particles or nanoparticles completely insulated by the dielectric matrix, and the films have more or less dielectric properties. The conductivity of the films increases with increasing filling factor, and finally a transition into the metallic state occurs. The behaviour of composite films near the percolation threshold, where the charge transport mechanism changes from thermally-activated tunnelling between individual metal objects to ohmic conductivity, is especially important. Physical properties of composite films are determined not only by the concentration of the metal component but also by the spatial and size distributions of the metal objects, which are influenced by the technology used. In our contribution, a study of composite structures with the help of methods of computational physics was performed. The study consists of two parts: (i) generation of simulated composite and nanocomposite films, using techniques based on hard-sphere or soft-sphere models as well as on atomic modelling, followed by characterization of the prepared composite structures by image analysis of their sections or projections; here an analysis of various morphological methods must be performed, as the standard algorithms based on the theory of mathematical morphology lose their sensitivity when applied to composite films; and (ii) study of the charge transport in the composites by the kinetic Monte Carlo method, as there is a close connection between the structural and electric properties of composite and nanocomposite films. It was found that near the percolation threshold the paths of tunnel current form so-called fuzzy clusters. The main aim of the present study was to establish the correlation between the morphological properties of composites/nanocomposites and the structures of conducting paths in them, in dependence on the technology of composite films.
Keywords: composite films, computer modelling, image analysis, nanocomposite films
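The hard-sphere generation step mentioned in part (i) can be sketched as random sequential adsorption of non-overlapping disks up to a target filling factor; the box size, particle radius, and target below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)

def generate_film(box=100.0, radius=2.0, filling_target=0.3, max_tries=200_000):
    """Random sequential adsorption of hard disks: metal particles are placed
    at random positions, rejected on overlap, until the target filling factor
    (area fraction) is reached or insertion keeps failing."""
    centers, area = [], 0.0
    disk = np.pi * radius ** 2
    for _ in range(max_tries):
        if area / box ** 2 >= filling_target:
            break
        p = rng.uniform(radius, box - radius, 2)
        if all(np.hypot(*(p - q)) >= 2 * radius for q in centers):
            centers.append(p)
            area += disk
    return np.array(centers), area / box ** 2

centers, ff = generate_film()
print(f"{len(centers)} particles, filling factor {ff:.2f}")
```

Rasterizing such configurations gives the synthetic sections on which the morphological image-analysis algorithms can then be benchmarked.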
Procedia PDF Downloads 393
41 Performance Improvement of Long-Reach Optical Access Systems Using Hybrid Optical Amplifiers
Authors: Shreyas Srinivas Rangan, Jurgis Porins
Abstract:
Internet traffic has increased exponentially due to the high demand for data rates from users, and the constantly growing metro and access networks are focused on improving the maximum transmit distance of long-reach optical networks. One of the common methods to improve the maximum transmit distance of long-reach optical networks at the component level is to use broadband optical amplifiers. The Erbium Doped Fiber Amplifier (EDFA) provides high amplification with a low noise figure, but due to its characteristics, its operation is limited to the C-band and L-band. In contrast, the Raman amplifier exhibits a wide amplification spectrum, and negative noise figure values can be achieved. To obtain such results, high-powered pumping sources are required. Operating Raman amplifiers with such high-powered optical sources may cause fire hazards and may damage the optical system. In this paper, we implement a hybrid optical amplifier configuration. EDFA and Raman amplifiers are used in this hybrid setup to combine the advantages of both EDFA and Raman amplifiers to improve the reach of the system. Using this setup, we analyze the maximum transmit distance of the network by obtaining a correlation diagram between the length of the single-mode fiber (SMF) and the Bit Error Rate (BER). This hybrid amplifier configuration is implemented in a Wavelength Division Multiplexing (WDM) system with a BER of 10⁻⁹ using the NRZ modulation format, and the gain uniformity, the signal-to-noise ratio (SNR), the efficiency of the pumping source, and the optical signal gain efficiency of the amplifier are studied experimentally in a mathematical modelling environment. Numerical simulations were implemented in the RSoft OptSim simulation software based on the nonlinear Schrödinger equation, using the split-step Fourier method and the Monte Carlo method for estimating the BER.
Keywords: Raman amplifier, erbium doped fibre amplifier, bit error rate, hybrid optical amplifiers
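The split-step Fourier method referenced above alternates dispersion steps in the frequency domain with nonlinear steps in the time domain. A minimal NumPy sketch of symmetric split-step propagation through standard single-mode fiber follows (this is a generic illustration, not the RSoft OptSim implementation; pulse shape, power, and step sizes are assumptions, and loss is neglected).

```python
import numpy as np

# Scalar NLSE: dA/dz = -i*(beta2/2)*d^2A/dt^2 + i*gamma*|A|^2*A
T_WIN, NT = 400e-12, 2 ** 12                 # time window (s), samples
t = np.linspace(-T_WIN / 2, T_WIN / 2, NT, endpoint=False)
w = 2 * np.pi * np.fft.fftfreq(NT, t[1] - t[0])
beta2, gamma = -21e-27, 1.3e-3               # s^2/m, 1/(W m): typical SMF values
dz, nsteps = 100.0, 500                      # 100 m steps -> 50 km span

A = np.sqrt(1e-3) / np.cosh(t / 20e-12)      # 1 mW sech pulse (illustrative)
half_disp = np.exp(0.5j * (beta2 / 2) * w ** 2 * dz)  # half dispersion operator
for _ in range(nsteps):
    A = np.fft.ifft(half_disp * np.fft.fft(A))        # half dispersion step
    A = A * np.exp(1j * gamma * np.abs(A) ** 2 * dz)  # full nonlinear step
    A = np.fft.ifft(half_disp * np.fft.fft(A))        # half dispersion step
print(f"output peak power: {np.max(np.abs(A) ** 2) * 1e3:.3f} mW")
```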
Procedia PDF Downloads 70
40 Astronomical Object Classification
Authors: Alina Muradyan, Lina Babayan, Arsen Nanyan, Gohar Galstyan, Vigen Khachatryan
Abstract:
We present a photometric method for identifying stars, galaxies, and quasars in multi-color surveys, which uses a library of more than 65,000 color templates for comparison with observed objects. The method aims to extract the information content of object colors in a statistically correct way, and performs a classification as well as a redshift estimation for galaxies and quasars in a unified approach based on the same probability density functions. For the redshift estimation, we employ an advanced version of the Minimum Error Variance estimator, which determines the redshift error from the redshift-dependent probability density function itself. The method was originally developed for the Calar Alto Deep Imaging Survey (CADIS), but is now used in a wide variety of survey projects. We checked its performance by spectroscopy of CADIS objects, where the method provides high reliability (6 errors among 151 objects with R < 24), especially for the quasar selection, and redshifts accurate within σz ≈ 0.03 for galaxies and σz ≈ 0.1 for quasars. For an optimization of future survey efforts, a few model surveys are compared, which are designed to use the same total amount of telescope time but different sets of broad-band and medium-band filters. Their performance is investigated by Monte Carlo simulations as well as by analytic evaluation in terms of classification and redshift estimation. If photon noise were the only error source, broad-band surveys and medium-band surveys should perform equally well, as long as they provide the same spectral coverage. In practice, medium-band surveys show superior performance due to their higher tolerance for calibration errors and cosmic variance. Finally, we discuss the relevance of color calibration and derive important conclusions for the issues of library design and choice of filters. The calibration accuracy poses strong constraints on an accurate classification, which are most critical for surveys with few, broad, and deeply exposed filters, but less severe for surveys with many, narrow, and less deep filters.
Keywords: VO, ArVO, DFBS, FITS, image processing, data analysis
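The template-matching core of such a classifier can be sketched as a chi-square comparison of observed colors against the library, with likelihoods derived from the fit quality; the toy three-band templates below are invented stand-ins for the real library of more than 65,000 entries.

```python
import numpy as np

def classify(obs_colors, obs_errors, templates, labels):
    """Compare an observed color vector against a template library and return
    the best label with its chi-square (a stand-in for the full probability-
    density treatment of the paper)."""
    chi2 = np.sum(((templates - obs_colors) / obs_errors) ** 2, axis=1)
    p = np.exp(-0.5 * (chi2 - chi2.min()))   # likelihoods up to a constant
    best = int(np.argmax(p))
    return labels[best], chi2[best], p / p.sum()

# Toy library: rows = templates (colors in 3 bands)
templates = np.array([[0.6, 0.3, 0.1],      # star
                      [0.9, 0.6, 0.4],      # galaxy (some redshift)
                      [0.2, 0.0, -0.3]])    # quasar
labels = ["star", "galaxy", "quasar"]
obs, err = np.array([0.85, 0.55, 0.45]), np.array([0.05, 0.05, 0.05])
print(classify(obs, err, templates, labels))
```

Extending the library with templates at many redshifts turns the same machinery into a redshift estimator: the normalized likelihoods over the redshift axis give the probability density from which the error estimate is drawn.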
Procedia PDF Downloads 78
39 Uncertainty Evaluation of Erosion Volume Measurement Using Coordinate Measuring Machine
Authors: Mohamed Dhouibi, Bogdan Stirbu, Chabotier André, Marc Pirlot
Abstract:
Internal barrel wear is a major factor affecting the performance of small caliber guns in their different life phases. Wear analysis is, therefore, a very important process for understanding how wear occurs, where it takes place, and how it spreads, with the aim of improving the accuracy and effectiveness of small caliber weapons. This paper discusses the measurement and analysis of combustion chamber wear for a small-caliber gun using a Coordinate Measuring Machine (CMM). Initially, two different NATO small caliber guns, 5.56×45 mm and 7.62×51 mm, are considered. A Micura Zeiss Coordinate Measuring Machine (CMM) equipped with the VAST XTR gold high-end sensor is used to measure the inner profile of the two guns every 300-shot cycle. The CMM parameters, such as (i) the measuring force, (ii) the measured points, (iii) the time of masking, and (iv) the scanning velocity, are investigated. In order to ensure minimum measurement error, a statistical analysis is adopted to select the most reliable CMM parameter combination. Next, two measurement strategies are developed to capture the shape and the volume of each gun chamber. A task-specific measurement uncertainty (TSMU) analysis is then carried out for each measurement plan. Different approaches to TSMU evaluation have been proposed in the literature. This paper discusses two different techniques. The first is the substitution method described in ISO 15530 part 3. This approach is based on the use of calibrated workpieces with similar shape and size to the measured part. The second is the Monte Carlo simulation method presented in ISO 15530 part 4. Uncertainty evaluation software (UES), also known as the Virtual Coordinate Measuring Machine (VCMM), is utilized in this technique to perform a point-by-point simulation of the measurements. To conclude, a comparison between both approaches is performed. Finally, the results of the measurements are verified through calibrated gauges of several dimensions specially designed for the two barrels. On this basis, an experimental database is developed for further analysis aiming to quantify the relationship between the volume of wear and the muzzle velocity of small caliber guns.
Keywords: coordinate measuring machine, measurement uncertainty, erosion and wear volume, small caliber guns
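The point-by-point Monte Carlo idea of ISO 15530 part 4 can be sketched as perturbing every probed coordinate with the CMM's point uncertainty and re-evaluating the wear volume each time; the chamber profile and uncertainty values below are invented for illustration, not the VCMM software's error model.

```python
import numpy as np

rng = np.random.default_rng(8)
N = 50_000

# Nominal worn-chamber profile: radius (mm) at 1-mm axial stations, versus the
# unworn reference radius; wear volume = integral of pi*(r_worn^2 - r_ref^2) dz
r_ref = 2.80
r_worn = np.array([2.95, 2.93, 2.90, 2.87, 2.84, 2.82, 2.81])
dz = 1.0

# Perturb every probed radius with a random point-measurement term plus a
# common form-error term shared by all points of one measurement run
u_point, u_form = 0.002, 0.001   # mm (illustrative values)
r_sim = (r_worn + rng.normal(0, u_point, (N, r_worn.size))
               + rng.normal(0, u_form, (N, 1)))
vol = np.pi * np.sum(r_sim ** 2 - r_ref ** 2, axis=1) * dz

print(f"wear volume = {vol.mean():.3f} mm^3, U(k=2) = {2 * vol.std():.3f} mm^3")
```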
Procedia PDF Downloads 150
38 A Study of Female Casino Dealers' Job Stress and Job Satisfaction: The Case of Macau
Authors: Xinrong Zong, Tao Zhang
Abstract:
Macau is known as the Oriental Monte Carlo, and its economy depends heavily on gambling. The dealer is the key position in the gambling industry: at the end of the fourth quarter of 2015, there were over 24,000 dealers among the 56,000 full-time employees in the industry, and more than half of the dealers were female. The dealer is also called a 'croupier'; the main responsibilities are shuffling, dealing, processing chips, rolling dice, and inspecting play. Due to the limited land and small population of Macao, the government has not allowed the hiring of foreign dealers since Macao's gambling industry began to develop. Local dealers therefore enjoy special advantages but also bear high stress from work. From the middle of last year, with the reduced income from gambling and the decline of mainland gamblers as well as VIP lounges, the working time of dealers increased greatly. Thus, many problems occurred in this situation, such as rising working pressures, psychological pressures, and family-responsibility pressures, which may affect job satisfaction as well. Because there is little research on dealer satisfaction, and few studies analyze female dealers from a feminine perspective, this study focuses on investigating the relationship between working pressure and job satisfaction from a feminine point of view. Several issues are discussed specifically: firstly, the current situation of the working pressures and job satisfaction of female dealers of different ages; secondly, whether there is any relationship between the working pressures and job satisfaction of female dealers of different ages; thirdly, the nature of the relationship between dealers' working pressures and job satisfaction across ages. This paper combines a qualitative approach with a quantitative approach and selected samples by convenience sampling. The research showed that female dealers of different ages have different kinds of working pressures; second, the job satisfaction of female dealers differs across ages; moreover, there is a negative correlation between working pressure and job satisfaction of female dealers in the different age groups; last but not least, working pressure has a significant negative impact on job satisfaction. The research results will provide a reference for the Macau gambling business: a model for improving dealers' working environment and increasing employees' job satisfaction, as well as offering tourists better service, which can help to attract more and more visitors through a good image of Macau gaming and tourism.
Keywords: female dealers, job satisfaction, working pressure, Macau
Procedia PDF Downloads 297
37 Simulation of Colombian Exchange Rate to Cover the Exchange Risk Using Financial Options as a Hedge Strategy
Authors: Natalia M. Acevedo, Luis M. Jimenez, Erick Lambis
Abstract:
Imperfections in the capital market are used to argue the relevance of the corporate risk management function. With a corporate hedge, the value of the company is increased by reducing the volatility of the expected cash flow, making it possible to face lower bankruptcy costs and financial difficulties without sacrificing the tax advantages of debt financing. With the purpose of avoiding exchange-rate troubles in the cash flows of Colombian exporting firms, this dissertation uses financial options on the exchange rate between the peso and the dollar to implement a financial hedge. In this study, a hedging strategy is designed for an exporting company in Colombia with the objective of preventing fluctuations because, if the exchange rate falls, the number of Colombian pesos that the company obtains from exports is less than agreed. The exchange rate of Colombia is measured by the TRM (Representative Market Rate), representing the number of Colombian pesos per American dollar. First, the TRM is modelled through Geometric Brownian Motion; with this, the price path is simulated using Monte Carlo simulations, finding the mean of the TRM for three, six, and twelve months. For financial hedging, currency options were used. The 6-month projection was covered with European-style currency options with a strike price of $2,780.47 for each month; this value corresponds to the last value of the historical TRM. In the settlement of the options in each month, the price paid for the premium, calculated with the Black-Scholes method for currency options, was considered. Finally, with the modeling of prices and the Monte Carlo simulation, the effect of exchange-rate hedging with options on the exporting company was determined by estimating the unit price at which the dollars were exchanged in the scenario without coverage and in the scenario with coverage. From these scenarios, it is determined that the TRM has an upward trend and the exporting firm will be affected positively because it will get more pesos for each dollar. The results show that the financial options manage to reduce the exchange risk. The expected value with coverage is approximately equal to the expected value without coverage, but the 5th percentile with coverage is greater than without coverage. The foregoing indicates that in the worst scenarios the exporting companies will obtain better prices for the sale of the currency if they hedge.
Keywords: currency hedging, futures, geometric Brownian motion, options
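The GBM simulation and the hedged-versus-unhedged comparison can be sketched as follows; the strike matches the abstract's $2,780.47, while the drift, volatility, and premium are invented placeholders rather than the study's calibrated values.

```python
import numpy as np

rng = np.random.default_rng(9)
N = 100_000

S0, K = 2780.47, 2780.47        # spot TRM and strike (COP/USD)
mu, sigma = 0.02, 0.12          # annual drift and volatility (illustrative)
T_h, premium = 0.5, 45.0        # 6-month horizon, option premium in COP/USD

# Geometric Brownian Motion terminal value of the TRM
Z = rng.normal(size=N)
S_T = S0 * np.exp((mu - 0.5 * sigma ** 2) * T_h + sigma * np.sqrt(T_h) * Z)

# The exporter sells USD: a long put with strike K guarantees at least K per USD
unhedged = S_T
hedged = np.maximum(S_T, K) - premium

for name, x in [("unhedged", unhedged), ("hedged", hedged)]:
    print(f"{name}: mean = {x.mean():7.1f}, 5th percentile = {np.percentile(x, 5):7.1f}")
```

The output reproduces the qualitative finding: the two means are close, but the 5th percentile is markedly better with coverage, so the hedge pays off precisely in the worst scenarios.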
Procedia PDF Downloads 130
36 Russian Pipeline Natural Gas Export Strategy under Uncertainty
Authors: Koryukaeva Ksenia, Jinfeng Sun
Abstract:
Europe has been a traditional importer of Russian natural gas for more than 50 years. In 2021, the Russian state-owned company Gazprom supplied about a third of all gas consumed in Europe. The Russia-Europe mutual dependence in terms of natural gas supplies has been causing many concerns about the energy security of the two sides for a long period of time. These days the issue has become more urgent than ever, considering the recent Russian invasion of Ukraine followed by increased large-scale geopolitical conflict, making the future of Russian natural gas supplies, and of global gas markets as well, highly uncertain. Hence, the main purpose of this study is to gain insight into the possible futures of Russian pipeline natural gas exports by a scenario planning method based on Monte Carlo simulation within the LUSS model framework, and to propose Russian pipeline natural gas export strategies based on the obtained scenario planning results. The scenario analysis revealed that recent geopolitical disputes disturbed the traditional, longstanding model of Russian pipeline gas exports, and, as a result, the prospects and pathways for Russian pipeline gas on the world markets will differ significantly from those before 2022. Specifically, our main findings show that (i) the events of 2022 generated many uncertainties for the long-term future of Russian pipeline gas exports in both western and eastern supply directions, including geopolitical, regulatory, economic, infrastructure, and other uncertainties; (ii) according to the scenario modelling results, Russian pipeline exports will face many challenges in the future in both western and eastern directions; a decrease in pipeline gas exports will inevitably affect the country's natural gas production and significantly reduce fossil fuel export revenues, jeopardizing the energy security of the country; and (iii) according to the proposed strategies, in order to ensure long-term stable export supplies in the changing environment, Russia may need to adjust its traditional export strategy by diversifying export flows and products, entering new markets, adapting its contracting mechanism, increasing competitiveness, and regaining a reputation as a reliable gas supplier.
Keywords: Russian natural gas, pipeline natural gas, uncertainty, scenario simulation, export strategy
Procedia PDF Downloads 60
35 Application of the Reliability Method for the Analysis of the Stability Limit States of Large Concrete Dams
Authors: Mustapha Kamel Mihoubi, Essadik Kerkar, Abdelhamid Hebbouche
Abstract:
Given the randomness of most of the factors affecting the stability of a gravity dam, probability theory is generally used to assess the risk of failure, and since there is no sharp transition from the stable state to the failed state, the stability failure process is considered a probabilistic event. Controlling the risk of failure is of capital importance; it relies on a cross-analysis of the severity of the consequences and the probability of occurrence of identified major accidents, which can pose a significant risk to concrete dam structures. Probabilistic risk analysis models are used to provide a better understanding of the reliability and structural failure of the works, in particular when assessing the stability of large structures exposed to major risk in the event of an accident or breakdown. This work studies the probability of failure of concrete dams through the application of reliability analysis methods, including the methods used in engineering; in our case, level II methods via the study of limit states. Hence, the probability of failure is estimated by analytical methods of the FORM (First-Order Reliability Method) and SORM (Second-Order Reliability Method) type. By way of comparison, a level III method was also used, which generates a full analysis of the problem, involving the integration of the probability density function of the random variables over the failure domain by means of Monte Carlo simulations. Taking into account the change in stress under the load combinations acting on the dam (normal, exceptional, and extreme), the calculation results obtained provide acceptable failure probability values that largely corroborate the theory; in fact, the probability of failure tends to increase with increasing load intensity, thus causing a significant decrease in strength, especially in the presence of combinations of unusual and extreme loads. Shear forces then induce sliding, which threatens the reliability of the structure through intolerable values of the probability of failure, especially in the case of increased uplift under a hypothetical failure of the drainage system.
Keywords: dam, failure, limit state, Monte Carlo, reliability, probability, sliding, Taylor
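A level III evaluation of the sliding limit state can be sketched as a direct Monte Carlo integration; the shear-friction limit-state form is a standard textbook expression, and all distribution parameters below are invented for illustration, not the dam studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(10)
N = 1_000_000

# Sliding limit state of a gravity dam section: g = resistance - driving force,
# failure when g <= 0
W = rng.normal(60e6, 3e6, N)        # self-weight, N
U = rng.normal(18e6, 2.5e6, N)      # uplift force, N
H = rng.normal(32e6, 2e6, N)        # horizontal hydrostatic thrust, N
phi = rng.normal(np.radians(45), np.radians(3), N)   # friction angle
c, A = 0.3e6, 40.0                  # cohesion (Pa) and joint area (m^2)

g = c * A + (W - U) * np.tan(phi) - H
pf = np.mean(g <= 0)
print(f"sliding failure probability: {pf:.2e}")
```

Raising the uplift mean U (the hypothetical drainage failure) makes the failure probability grow rapidly, matching the sensitivity reported in the abstract.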
Procedia PDF Downloads 318
34 Cost Overruns in Mega Projects: Project Progress Prediction with Probabilistic Methods
Authors: Yasaman Ashrafi, Stephen Kajewski, Annastiina Silvennoinen, Madhav Nepal
Abstract:
Mega projects, whether in the construction, urban development, or energy sectors, are among the key drivers that build the foundation of wealth and modern civilization in regions and nations. Such projects require economic justification and substantial capital investment, often derived from individual and corporate investors as well as governments. Cost overruns and time delays in these mega projects demand a new approach to more accurately predict project costs and establish realistic financial plans. The significance of this paper is that it helps improve the cost efficiency of megaprojects and decrease cost overruns. This research will assist Project Managers (PMs) in making timely and appropriate decisions about both the cost and outcomes of ongoing projects. This research, therefore, examines the oil and gas industry, where most mega projects apply the classic methods of the Cost Performance Index (CPI) and Schedule Performance Index (SPI) and rely on project data to forecast cost and time. Because these projects almost always overrun in cost and time, even in the early phases, the probabilistic methods of Monte Carlo Simulation (MCS) and Bayesian adaptive forecasting were used to predict project cost at completion. The current theoretical and mathematical models, which forecast the total expected cost and project completion date during the execution phase of an ongoing project, are evaluated. The Earned Value Management (EVM) method is unable to predict the cost at completion of a project accurately due to the lack of sufficiently detailed project information, especially in the early phase of the project. During the project execution phase, the Bayesian adaptive forecasting method incorporates predictions into the actual performance data from earned value management and revises pre-project cost estimates, making full use of the available information. The outcome of this research is to improve the accuracy of both cost prediction and final duration. This research provides a warning method to identify when current project performance deviates from planned performance and creates an unacceptable gap between preliminary planning and actual performance. This warning method will support project managers in taking corrective actions on time.
Keywords: cost forecasting, earned value management, project control, project management, risk analysis, simulation
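The interplay between the classic EVM forecast and a Bayesian revision can be sketched with a conjugate-normal update that blends the pre-project estimate with the EVM-based estimate-at-completion, weighting each by its precision; all figures below are invented for illustration.

```python
# Classic EVM estimate-at-completion versus a Bayesian revision
BAC = 1000.0                        # budget at completion, $M
EV, AC = 250.0, 310.0               # earned value and actual cost to date
CPI = EV / AC

eac_evm = BAC / CPI                 # EVM forecast: 1240 $M
prior_eac, prior_sd = 1050.0, 60.0  # pre-project estimate and its uncertainty
evm_sd = 150.0                      # early-phase EVM forecasts are noisy

# Conjugate-normal update: posterior mean is the precision-weighted average
w = (1 / prior_sd ** 2) / (1 / prior_sd ** 2 + 1 / evm_sd ** 2)
eac_bayes = w * prior_eac + (1 - w) * eac_evm
print(f"EVM EAC = {eac_evm:.0f}, Bayesian EAC = {eac_bayes:.0f}")
# As the project progresses, evm_sd shrinks and the forecast shifts from the
# prior toward the EVM estimate, making full use of the available information.
```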
Procedia PDF Downloads 403
33 A Heteroskedasticity Robust Test for Contemporaneous Correlation in Dynamic Panel Data Models
Authors: Andreea Halunga, Chris D. Orme, Takashi Yamagata
Abstract:
This paper proposes a heteroskedasticity-robust Breusch-Pagan test of the null hypothesis of zero cross-section (or contemporaneous) correlation in linear panel-data models, without necessarily assuming independence of the cross-sections. The procedure allows for either fixed, strictly exogenous and/or lagged dependent regressor variables, as well as quite general forms of both non-normality and heteroskedasticity in the error distribution. The asymptotic validity of the test procedure is predicated on the number of time series observations, T, being large relative to the number of cross-section units, N, in that: (i) either N is fixed as T→∞; or, (ii) N²/T→0, as both T and N diverge, jointly, to infinity. Given this, it is not expected that asymptotic theory would provide an adequate guide to finite sample performance when T/N is "small". Because of this, we also propose, and establish the asymptotic validity of, a number of wild bootstrap schemes designed to provide improved inference when T/N is small. Across a variety of experimental designs, a Monte Carlo study suggests that the predictions from asymptotic theory do, in fact, provide a good guide to the finite sample behaviour of the test when T is large relative to N. However, when T and N are of similar orders of magnitude, discrepancies between the nominal and empirical significance levels occur as predicted by the first-order asymptotic analysis. On the other hand, for all the experimental designs, the proposed wild bootstrap approximations do improve agreement between nominal and empirical significance levels when T/N is small, with a recursive-design wild bootstrap scheme performing best, in general, and providing quite close agreement between the nominal and empirical significance levels of the test even when T and N are of similar size. Moreover, in comparison with the wild bootstrap "version" of the original Breusch-Pagan test, our experiments indicate that the corresponding version of the heteroskedasticity-robust Breusch-Pagan test appears reliable. As an illustration, the proposed tests are applied to a dynamic growth model for a panel of 20 OECD countries.
Keywords: cross-section correlation, time-series heteroskedasticity, dynamic panel data, heteroskedasticity robust Breusch-Pagan test
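For orientation, the classic Breusch-Pagan LM statistic that the paper robustifies is shown below on simulated residuals; the heteroskedasticity-robust version replaces T*r_ij² with a robustly studentized quantity, which is not reproduced here.

```python
import numpy as np
from scipy.stats import chi2

def breusch_pagan_cd(resid):
    """Classic Breusch-Pagan LM test of zero cross-section correlation from a
    (T x N) array of panel residuals: LM = T * sum of squared pairwise
    residual correlations, chi-square with N(N-1)/2 df under the null."""
    T, N = resid.shape
    r = np.corrcoef(resid, rowvar=False)     # N x N residual correlations
    iu = np.triu_indices(N, k=1)
    lm = T * np.sum(r[iu] ** 2)
    df = N * (N - 1) // 2
    return lm, 1 - chi2.cdf(lm, df)

rng = np.random.default_rng(11)
lm, p = breusch_pagan_cd(rng.normal(size=(200, 10)))   # H0 true here
print(f"LM = {lm:.1f}, p-value = {p:.3f}")
```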
Procedia PDF Downloads 432
32 Energy Storage Modelling for Power System Reliability and Environmental Compliance
Authors: Rajesh Karki, Safal Bhattarai, Saket Adhikari
Abstract:
Reliable and economic operation of power systems is becoming extremely challenging with large-scale integration of renewable energy sources, due to the intermittency and uncertainty associated with renewable power generation. It is, therefore, important to make a quantitative risk assessment and explore the potential resources to mitigate such risks. Probabilistic models for different energy storage systems (ESS), such as the flywheel energy storage system (FESS) and compressed air energy storage (CAES), incorporating specific charge/discharge performance and failure characteristics suitable for probabilistic risk assessment in power system operation and planning, are presented in this paper. The proposed methodology used in FESS modelling offers flexibility to accommodate different configurations of plant topology. CAES is perceived to have high potential for grid-scale application, and a hybrid approach is proposed which embeds a Monte Carlo simulation (MCS) method in an analytical technique to develop a suitable reliability model of the CAES. The proposed ESS models are applied to a test system to investigate the economic and reliability benefits of the energy storage technologies in system operation and planning, and to assess their contributions in facilitating wind integration during different operating scenarios and system configurations. A comparative study considering various storage system topologies is also presented. The impacts of failure rates of the critical components of ESS on the expected state of charge (SOC) and the performance of the different types of ESS during operation are illustrated with selected studies on the test system. The conclusions drawn from the study results provide valuable information to help policymakers, system planners, and operators arrive at effective and efficient policies, investment decisions, and operating strategies for planning and operation of power systems with large penetrations of renewable energy sources. Keywords: flywheel energy storage, compressed air energy storage, power system reliability, renewable energy, system planning, system operation
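The sketch below shows the general shape of a sequential Monte Carlo reliability study of the kind described: a two-state (up/down) storage unit with random failure and repair, charging on wind surplus and discharging on deficit, with energy-not-served accumulated as the risk index. The failure/repair rates, capacities, efficiency, and the load and wind figures are all illustrative assumptions, not the paper's models or data.

```python
import numpy as np

rng = np.random.default_rng(1)

HOURS = 8760
LAMBDA = 2 / 8760      # storage failure rate per hour (assumed)
MU     = 1 / 24        # repair rate per hour (assumed, ~24 h mean repair)
E_MAX, P_MAX, ETA = 40.0, 10.0, 0.85   # MWh, MW, charging efficiency (assumed)

def one_year():
    soc, up, unserved = 0.5 * E_MAX, True, 0.0
    for _ in range(HOURS):
        # two-state Markov component model: up -> down with prob LAMBDA, etc.
        up = (rng.random() > LAMBDA) if up else (rng.random() < MU)
        wind = 12.0 * rng.weibull(2.0)          # illustrative wind output, MW
        load = 15.0                             # illustrative constant load, MW
        surplus = wind - load
        if up and surplus > 0:                  # charge on surplus
            soc = min(E_MAX, soc + ETA * min(surplus, P_MAX))
        elif up and surplus < 0:                # discharge on deficit
            d = min(-surplus, P_MAX, soc)
            soc -= d
            unserved += max(0.0, -surplus - d)
        elif surplus < 0:                       # storage down: deficit unserved
            unserved += -surplus
    return unserved

eens = np.mean([one_year() for _ in range(200)])  # expected energy not served
print(f"EENS ≈ {eens:.0f} MWh/yr (illustrative)")
```

Raising `LAMBDA` for a critical component and re-running shows the kind of SOC/reliability sensitivity the selected studies illustrate.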
Procedia PDF Downloads 130
31 Evaluating the Feasibility of Chemical Dermal Exposure Assessment Model
Authors: P. S. Hsi, Y. F. Wang, Y. F. Ho, P. C. Hung
Abstract:
The aim of the present study was to explore dermal exposure assessment models for chemicals developed abroad and to evaluate the feasibility of a chemical dermal exposure assessment model for the manufacturing industry in Taiwan. We analyzed six semi-quantitative risk management tools: UK - Control of Substances Hazardous to Health (COSHH), Europe - Risk Assessment of Occupational Dermal Exposure (RISKOFDERM), Netherlands - Dose-Related Effect Assessment Model (DREAM), Netherlands - Stoffenmanager (STOFFEN), Nicaragua - Dermal Exposure Ranking Method (DERM), and USA/Canada - Public Health Engineering Department (PHED). Five types of manufacturing industry were selected for evaluation. Monte Carlo simulation was used to analyze the sensitivity of each factor, and the correlation between the assessment results of each semi-quantitative model and the exposure factors used in the model was analyzed to identify the important evaluation indicators of the dermal exposure assessment models. To assess the effectiveness of the semi-quantitative assessment models, this study also produced quantitative dermal exposure estimates using a prediction model and verified the correlations via Pearson's test. Results show that COSHH was unable to determine the strength of its decision factors because the results for all industries fell into the same risk level. In the DERM model, the transmission process, the exposed area, and the clothing protection factor are all positively correlated with the score. In the STOFFEN model, the fugitive emission, the operation, the near-field and far-field concentrations, and the operating time and frequency are positively correlated. Skin exposure, relative working time, and working environment are positively correlated in the DREAM model. In the RISKOFDERM model, the actual exposure situation and exposure time are positively correlated. We also found high correlations for the DERM and RISKOFDERM models, with correlation coefficients of 0.92 and 0.93 (p<0.05), respectively. The STOFFEN and DREAM models correlated poorly, with coefficients of 0.24 and 0.29 (p>0.05), respectively. According to these results, both the DERM and RISKOFDERM models are suitable for use in the selected manufacturing industries. However, given the small sample size evaluated in this study, more categories of industries should be evaluated in the future to reduce uncertainty and enhance applicability. Keywords: dermal exposure, risk management, quantitative estimation, feasibility evaluation
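A minimal sketch of the sensitivity analysis described: draw the inputs of a DERM-style additive score at random and rank each factor by its Pearson correlation with the total score. The factor names, score ranges, and the additive scoring rule are illustrative assumptions, not the published models' coefficients.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(7)
n = 10_000

# hypothetical semi-quantitative factor scores, each on a 1-4 ordinal scale
factors = {
    "transmission_process": rng.integers(1, 5, n),
    "exposed_area":         rng.integers(1, 5, n),
    "clothing_protection":  rng.integers(1, 5, n),
}
score = sum(factors.values())     # illustrative additive DERM-like total score

# Monte Carlo sensitivity: correlation of each input with the model output
for name, x in factors.items():
    r, p = pearsonr(x, score)
    print(f"{name:22s} r = {r:.2f} (p = {p:.1e})")
```

The same `pearsonr` call, applied to a semi-quantitative score against a quantitative exposure estimate, yields the 0.92/0.93-style validation coefficients the abstract reports.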
Procedia PDF Downloads 169
30 Italian Sign Language and Deafness in a North-Italian Border Region: Results of Research on the Linguistic Needs of Teachers and Students
Authors: Maria Tagarelli De Monte
Abstract:
In 2021, the passage of the law recognizing Italian Sign Language (LIS) as the language of the Italian deaf minority prompted the inclusion of this visual-gestural language in the curricula of interpreters and translators who choose an academic setting for their training. Yet a gap remains in the LIS education of teachers and communication assistants, who are the referring figures for people who are deaf or hard of hearing in mainstream education. As well documented in the scientific literature, deaf children often experience severe difficulties with the languages spoken in the country where they grow up, manifesting at all levels of literacy competence. The research introduced here explores the experience of deaf students (and their teachers) attending schools in areas characterized by strong native multilingualism, such as Friuli-Venezia Giulia (FVG) on Italy's north-eastern border. This region is peculiar in that the native population may be bilingual in Italian and Friulian (50% of the local population), German, and/or Slovenian. The research involved schools of all levels in Friuli, with the aim of understanding the relationship between the language skills expressed by teachers and those shown by deaf learners with a background in sign language. In addition to collecting specific information on the degree of preparation of teachers in deaf-related matters and LIS, the research highlighted the often under-considered role played by the communication assistants who work alongside deaf students. On several occasions, teachers and assistants were unanimous in affirming the importance of mutual collaboration and of adequate consideration of the educational-rehabilitative history of the deaf child and her family. The research was based on a mixed method of structured questionnaires and semi-structured interviews with the referring teachers. As a result, a varied and complex picture emerged, showing an asymmetry in the preparation of personnel dedicated to deaf learners. Considering how long Italian education has invested in creating an inclusive and accessible school system (e.g., the "Ten Theses for Democratic Language Education"), a constructive analysis completes the discussion in an attempt to understand how linguistic (and modal) differences can become levers of inclusion. Keywords: FVG, LIS, linguistic needs, deafness, teacher education, bilingual bimodal children, communication assistants, inclusion model
Procedia PDF Downloads 45
29 6-Degree-Of-Freedom Spacecraft Motion Planning via Model Predictive Control and Dual Quaternions
Authors: Omer Burak Iskender, Keck Voon Ling, Vincent Dubanchet, Luca Simonini
Abstract:
This paper presents a Guidance and Control (G&C) strategy to approach and synchronize with potentially rotating targets. The proposed strategy generates and tracks a safe trajectory for space servicing missions, including tasks like approaching, inspecting, and capturing. The main objective of this paper is to validate the G&C laws using a Hardware-In-the-Loop (HIL) setup with realistic rendezvous and docking equipment. Throughout this work, the assumption of full relative state feedback is relaxed by using onboard sensors that introduce realistic errors and delays, and the proposed closed-loop approach demonstrates robustness to this challenge. Moreover, the G&C blocks are unified via the Model Predictive Control (MPC) paradigm, and the coupling between translational and rotational motion is addressed via a dual-quaternion-based kinematic description. In this work, G&C is formulated as a convex optimization problem in which constraints such as thruster limits and output constraints are explicitly handled. Furthermore, the Monte Carlo method is used to evaluate the robustness of the proposed method to initial condition errors, uncertainty in the target's motion and attitude, and actuator errors. A capture scenario is tested with a robotic test bench whose onboard sensors estimate the position and orientation of a drifting satellite through camera imagery. Finally, the approach is compared with currently used robust H-infinity controllers and with the guidance profile provided by the industrial partner. The HIL experiments demonstrate that the proposed strategy is a potential candidate for future space servicing missions because 1) the algorithm is real-time implementable, as convex programming offers deterministic convergence properties and guarantees a finite-time solution; 2) critical physical and output constraints are respected; 3) robustness to sensor errors and uncertainties in the system is proven; and 4) it couples translational motion with rotational motion. Keywords: dual quaternion, model predictive control, real-time experimental test, rendezvous and docking, spacecraft autonomy, space servicing
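A heavily simplified sketch of the convex-MPC formulation described, covering only the translational part (the dual-quaternion coupling with attitude is omitted): a double-integrator relative-motion model over a finite horizon with box thrust limits, solved as a convex QP with cvxpy. The horizon, weights, dynamics, and initial state are illustrative assumptions.

```python
import cvxpy as cp
import numpy as np

dt, H = 1.0, 20                      # step [s] and horizon length (assumed)
# discrete double-integrator relative dynamics (simplified stand-in for
# the relative orbital dynamics used in the paper)
A = np.block([[np.eye(3), dt * np.eye(3)],
              [np.zeros((3, 3)), np.eye(3)]])
B = np.vstack([0.5 * dt**2 * np.eye(3), dt * np.eye(3)])

x0 = np.array([50.0, -30.0, 10.0, 0.0, 0.0, 0.0])   # rel. position/velocity
u_max = 0.5                                          # thrust accel limit [m/s^2]

x = cp.Variable((6, H + 1))
u = cp.Variable((3, H))
cost = cp.sum_squares(x[:, 1:]) + 10 * cp.sum_squares(u)  # drive state to 0
cons = [x[:, 0] == x0]
for k in range(H):
    cons += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],    # dynamics
             cp.norm(u[:, k], "inf") <= u_max]            # thruster limits
cp.Problem(cp.Minimize(cost), cons).solve()
print("first control move:", np.round(u.value[:, 0], 3))
```

In receding-horizon fashion, only the first move is applied and the problem is re-solved at the next step; the QP structure is what makes the deterministic, real-time convergence claim possible.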
Procedia PDF Downloads 146
28 Flash Flood in Gabes City (Tunisia): Hazard Mapping and Vulnerability Assessment
Authors: Habib Abida, Noura Dahri
Abstract:
Flash floods are among the most serious natural hazards and have disastrous environmental and human impacts. They are associated with exceptional rain events characterized by short duration, very high intensity, rapid flow, and small spatial extent. Flash floods happen very suddenly and are difficult to forecast. They generally damage agricultural crops, property, and infrastructure, and may even result in the loss of human lives. The city of Gabes (south-eastern Tunisia) has been exposed to numerous damaging floods because of its mild topography, clay soil, high urbanization rate, and erratic rainfall distribution. The risks associated with this situation are expected to increase further in the future because of climate change, deemed responsible for the increased frequency and severity of this natural hazard. A major flooding event hit the region on June 2nd, 2014, causing human deaths and major material losses. It resulted in the stagnation of storm water in the numerous low-lying zones of the study area, endangering human health and causing disastrous environmental impacts. The characterization of flood risk in the Gabes watershed (south-eastern Tunisia) is therefore an important step for flood management. The Analytical Hierarchy Process (AHP) method, coupled with Monte Carlo simulation and a geographic information system, was applied to delineate and characterize flood-prone areas. A spatial database was developed from the geological map, a digital elevation model, land use, and rainfall data in order to evaluate the different factors susceptible to affect the flood analysis. The results obtained were validated against remote sensing data for the zones that showed very high flood hazard during the extreme rainfall event of June 2014. Moreover, a survey was conducted in different areas of the city in order to understand and explore the causes of this disaster, its extent, and its consequences. Keywords: analytical hierarchy process, flash floods, Gabes, remote sensing, Tunisia
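A minimal sketch of the AHP weighting step at the core of this kind of study: derive criterion weights from a pairwise-comparison matrix via its principal eigenvector and check Saaty's consistency ratio. The 4x4 comparison values for slope, soil, land use, and rainfall are invented for illustration, not the study's judgments; the paper's Monte Carlo coupling would perturb these judgments and re-derive the weights many times.

```python
import numpy as np

# pairwise comparisons (Saaty 1-9 scale); order: slope, soil, land use, rainfall
A = np.array([[1,   3,   5,   2  ],
              [1/3, 1,   3,   1/2],
              [1/5, 1/3, 1,   1/4],
              [1/2, 2,   4,   1  ]])

vals, vecs = np.linalg.eig(A)
k = np.argmax(vals.real)
w = np.abs(vecs[:, k].real)
w /= w.sum()                                   # normalized criterion weights

n = A.shape[0]
CI = (vals[k].real - n) / (n - 1)              # consistency index
CR = CI / 0.90                                 # random index RI = 0.90 for n = 4
print("weights:", np.round(w, 3))
print("CR =", round(CR, 3), "(< 0.1 is conventionally acceptable)")
```

The resulting weights then drive a GIS weighted overlay of the reclassified factor rasters to produce the hazard map that the 2014 remote sensing data validate.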
Procedia PDF Downloads 109
27 An Approach for the Capture of Carbon Dioxide via Polymerized Ionic Liquids
Authors: Ghassan Mohammad Alalawi, Abobakr Khidir Ziyada, Abdulmajeed Khan
Abstract:
Ionic liquids (ILs) have lately been suggested as a potential alternative, next-generation CO₂-selective separation medium. It is easier to "tune" the solubility and selectivity of CO₂ in ILs than in organic solvents, via modification of the cation and/or anion structures. Compared to room-temperature ionic liquids, polymerized ionic liquids exhibit increased CO₂ sorption capacities and accelerated sorption/desorption rates. This research investigates the correlation between the CO₂ sorption rate and capacity of polymerized ionic liquids (pILs) and their chemical structure. One of the hypotheses we offer to explain the affinity between CO₂ and pILs is the dependence of sorption on the ion conductivity of the pILs' cations and anions. This hypothesis was supported by Monte Carlo molecular dynamics simulation results, which demonstrated that CO₂ molecules are localized around both cations and anions and that their sorption depends on the cations' and anions' ion conductivities. Polymerized ionic liquids were synthesized to investigate the impact of substituent alkyl chain length, cation, and anion on CO₂ sorption rate and capacity. Three stages are involved in synthesizing the pILs under study: first, a trialkylamine and vinyl benzyl chloride are directly quaternized to obtain the required cation; next, anion exchange is performed; and finally, the obtained IL is polymerized to form the desired product (pILs). The synthesized pILs' structures were confirmed using elemental analysis and NMR. The pILs were characterized by examining their structural topology, chloride content, density, and thermal stability using SEM, ion chromatography (a Metrohm Model 761 Compact IC apparatus), an ultrapycnometer, and TGA. As determined by CO₂ sorption measurements with a magnetic suspension balance (MSB) apparatus, the sorption capacity of pILs depends on the cation and anion ion conductivities. The anion's size also influences the CO₂ sorption rate and capacity. It was discovered that adding water to pILs caused a dramatic, systematic swelling of the pILs, resulting in a significant increase in their capacity to absorb CO₂ under identical conditions, contingent on the type of gas, gas flow, applied gas pressure, and water content of the pILs. Along with its capacity to increase surface area through swelling, water also possesses very high ion conductivity for cations and anions, enhancing the ability of pILs to absorb CO₂. Keywords: polymerized ionic liquids, carbon dioxide, swelling, characterization
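A small illustrative calculation of how gravimetric (magnetic suspension balance) readings translate into a sorption capacity, with a simple buoyancy correction of the kind such measurements generally require. The masses, sample volume, and gas density below are invented placeholders, not values from the study.

```python
M_CO2 = 44.01            # molar mass of CO2, g/mol

def co2_uptake(m0_g, mt_g, sample_cm3, gas_density_g_cm3):
    """CO2 uptake in mmol per g of sorbent, buoyancy-corrected."""
    buoyancy = gas_density_g_cm3 * sample_cm3     # apparent mass loss from buoyancy
    absorbed = (mt_g - m0_g) + buoyancy           # true mass of gas taken up
    return absorbed / M_CO2 / m0_g * 1000.0

# hypothetical reading: a 1.000 g sample shows a 0.035 g apparent mass gain
# at elevated pressure, where the CO2 gas density is ~0.018 g/cm3
print(f"{co2_uptake(1.000, 1.035, 0.80, 0.018):.2f} mmol/g")
```

Repeating the reading across pressures, gas flows, and water contents gives the uptake trends the abstract reports for dry versus water-swollen pILs.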
Procedia PDF Downloads 62
26 Molecular Dynamics Simulations on Richtmyer-Meshkov Instability of Li-H2 Interface at Ultra High-Speed Shock Loads
Authors: Weirong Wang, Shenghong Huang, Xisheng Luo, Zhenyu Li
Abstract:
Material mixing processes and related dynamic issues under extreme compression have attracted growing attention over the last ten years because of their engineering relevance to inertial confinement fusion (ICF) and hypervelocity aircraft development. However, models and methods that can handle fully coupled turbulent material mixing and complex fluid evolution in the high-energy-density regime are still lacking. For macroscopic hydrodynamics, three numerical methods - direct numerical simulation (DNS), large eddy simulation (LES), and the Reynolds-averaged Navier-Stokes equations (RANS) - have reached relatively acceptable consensus under conditions of the low-energy-density regime. Under high-energy-density conditions, however, they cannot be applied directly, because dissociation, ionization, dramatic changes in the equation of state and thermodynamic properties, etc., may render the governing equations invalid in some coupled situations. At the micro/meso scale, by contrast, methods based on Molecular Dynamics (MD) and Monte Carlo (MC) models have proved to be promising and effective ways to investigate such issues. In this study, both classical MD and first-principle-based electron force field MD (eFF-MD) methods are applied to investigate the Richtmyer-Meshkov instability (RMI) of a metal lithium and gaseous hydrogen (Li-H2) interface at shock loading speeds ranging from 3 km/s to 30 km/s. It is found that: 1) the classical MD method, based on predefined potential functions, has limits in application to extreme conditions, since it cannot simulate the ionization process and its potential functions are not suitable for all conditions, while the eFF-MD method can correctly simulate ionization owing to its 'ab initio' character; 2) owing to computational cost, the eFF-MD results are also influenced by the simulation domain dimensions, boundary conditions, choice of relaxation time, etc., and a series of tests has been conducted to determine optimized parameters; 3) ionization induced by strong shock compression has important effects on the Li-H2 interface evolution in RMI, indicating a new micromechanism of RMI in the high-energy-density regime. Keywords: first-principle, ionization, molecular dynamics, material mixture, Richtmyer-Meshkov instability
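A toy illustration of the classical-MD side of the comparison: one velocity-Verlet integration loop over a Lennard-Jones potential in reduced units. The actual study uses full Li-H2 interatomic potentials and eFF electron force fields at vastly larger scales; this is only the generic integrator skeleton such codes share, with every number an arbitrary toy value.

```python
import numpy as np

def lj_forces(pos, eps=1.0, sigma=1.0):
    """Pairwise Lennard-Jones forces for an (n, 3) position array (O(n^2))."""
    n = len(pos)
    f = np.zeros_like(pos)
    for i in range(n):
        for j in range(i + 1, n):
            r = pos[i] - pos[j]
            d2 = r @ r
            s6 = (sigma**2 / d2) ** 3
            f_ij = 24 * eps * (2 * s6**2 - s6) / d2 * r   # repulsive when close
            f[i] += f_ij
            f[j] -= f_ij
    return f

def velocity_verlet(pos, vel, mass, dt, steps):
    f = lj_forces(pos)
    for _ in range(steps):
        vel += 0.5 * dt * f / mass      # half kick
        pos += dt * vel                 # drift
        f = lj_forces(pos)
        vel += 0.5 * dt * f / mass      # half kick with updated forces
    return pos, vel

# 16 atoms on a loose lattice so no pair starts deep in the repulsive core
pos = np.mgrid[0:4, 0:2, 0:2].reshape(3, -1).T * 1.5
vel = np.random.default_rng(3).standard_normal((16, 3)) * 0.1
pos, vel = velocity_verlet(pos, vel, mass=1.0, dt=0.005, steps=100)
print("kinetic energy:", 0.5 * np.sum(vel**2))
```

The abstract's point 1) is precisely what this skeleton cannot do: `lj_forces` is a fixed, predefined potential, so no rearrangement of electrons (ionization) can ever emerge from it, whereas eFF-MD evolves electron degrees of freedom explicitly.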
Procedia PDF Downloads 225
25 The Usage of Bridge Estimator for Hegy Seasonal Unit Root Tests
Authors: Huseyin Guler, Cigdem Kosar
Abstract:
The aim of this study is to propose the Bridge estimator for seasonal unit root tests. Seasonality is an important feature of many economic time series: some variables contain seasonal patterns, and forecasts that ignore important seasonal patterns have high variance. It is therefore very important to eliminate seasonality in seasonal macroeconomic data. Several methods exist for eliminating the impacts of seasonality in time series. One is filtering the data; however, this method leads to undesired consequences in unit root tests, especially if the data are generated by a stochastic seasonal process. Another is the use of seasonal dummy variables. Some seasonal patterns result from stationary seasonal processes and can be modelled using seasonal dummies, but if the seasonal pattern varies and changes over time, so that the seasonal process is non-stationary, deterministic seasonal dummies are inadequate to capture it and are unsuitable for modelling such seasonally non-stationary series. Instead, it is necessary to take seasonal differences if there are seasonal unit roots in the series. Alternative methods have been proposed in the literature to test for seasonal unit roots, such as the Dickey, Hasza, and Fuller (DHF) and Hylleberg, Engle, Granger, and Yoo (HEGY) tests. The HEGY test can also be used to test for seasonal unit roots at different frequencies (monthly, quarterly, and semiannual). Another issue in unit root tests is lag selection: lagged dependent variables are added to the model in seasonal unit root tests, as in ordinary unit root tests, to overcome the autocorrelation problem. In this case, it is necessary first to choose the lag length and determine any deterministic components (i.e., a constant and trend), and then use the proper model to test for seasonal unit roots. However, this two-step procedure might lead to size distortions and a lack of power in seasonal unit root tests. Recent studies show that Bridge estimators perform well in selecting the optimal lag length while differentiating nonstationary from stationary models for nonseasonal data. The advantage of this estimator is the elimination of the two-step nature of conventional unit root tests, which leads to a gain in size and power. In this paper, the Bridge estimator is proposed for testing seasonal unit roots in a HEGY model. A Monte Carlo experiment is conducted to determine the efficiency of this approach and to compare its size and power with the HEGY test. Since the Bridge estimator performs well in model selection, our approach may lead to some gain in terms of size and power over the HEGY test. Keywords: bridge estimators, HEGY test, model selection, seasonal unit root
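A minimal sketch of a bridge-penalized least-squares fit — the penalty is λ·Σ|βⱼ|^γ with γ < 1, here approximated by iterated local quadratic (weighted-ridge) steps. In the paper this kind of estimator would select the lag augmentation of the HEGY regression in one step; the simulated design below is generic illustrative data, not a HEGY model.

```python
import numpy as np

def bridge_lqa(X, y, lam=1.0, gamma=0.5, iters=50, eps=1e-8):
    """Bridge estimator via local quadratic approximation of the penalty."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]          # OLS starting value
    for _ in range(iters):
        # the |b|^gamma penalty is locally approximated by a ridge whose
        # weight grows as the current coefficient shrinks toward zero
        w = lam * gamma * np.maximum(np.abs(beta), eps) ** (gamma - 2)
        beta = np.linalg.solve(X.T @ X + np.diag(w), X.T @ y)
        beta[np.abs(beta) < 1e-4] = 0.0                  # hard-zero tiny lags
    return beta

rng = np.random.default_rng(5)
n, p = 200, 8
X = rng.standard_normal((n, p))
true = np.array([1.5, 0, 0, -1.0, 0, 0, 0, 0])           # sparse truth
y = X @ true + rng.standard_normal(n)
print(np.round(bridge_lqa(X, y, lam=5.0), 2))            # recovers the support
```

Because the fit and the lag selection happen jointly, there is no separate lag-order pre-test, which is the source of the size and power gain the abstract anticipates.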
Procedia PDF Downloads 340