Search results for: simulated driving
114 Regulatory and Economic Challenges of AI Integration in Cyber Insurance
Authors: Shreyas Kumar, Mili Shangari
Abstract:
Integrating artificial intelligence (AI) in the cyber insurance sector represents a significant advancement, offering the potential to revolutionize risk assessment, fraud detection, and claims processing. However, this integration introduces a range of regulatory and economic challenges that must be addressed to ensure responsible and effective deployment of AI technologies. This paper examines the multifaceted regulatory landscape governing AI in cyber insurance and explores the economic implications of compliance, innovation, and market dynamics. AI's capabilities in processing vast amounts of data and identifying patterns make it an invaluable tool for insurers in managing cyber risks. Yet, the application of AI in this domain is subject to stringent regulatory scrutiny aimed at safeguarding data privacy, ensuring algorithmic transparency, and preventing biases. Regulatory bodies, such as the European Union with its General Data Protection Regulation (GDPR), mandate strict compliance requirements that can significantly impact the deployment of AI systems. These regulations necessitate robust data protection measures, ethical AI practices, and clear accountability frameworks, all of which entail substantial compliance costs for insurers. The economic implications of these regulatory requirements are profound. Insurers must invest heavily in upgrading their IT infrastructure, implementing robust data governance frameworks, and training personnel to handle AI systems ethically and effectively. These investments, while essential for regulatory compliance, can strain financial resources, particularly for smaller insurers, potentially leading to market consolidation. Furthermore, the cost of regulatory compliance can translate into higher premiums for policyholders, affecting the overall affordability and accessibility of cyber insurance. Despite these challenges, the potential economic benefits of AI integration in cyber insurance are significant. AI-enhanced risk assessment models can provide more accurate pricing, reduce the incidence of fraudulent claims, and expedite claims processing, leading to overall cost savings and increased efficiency. These efficiencies can improve the competitiveness of insurers and drive innovation in product offerings. However, balancing these benefits with regulatory compliance is crucial to avoid legal penalties and reputational damage. The paper also explores the potential risks associated with AI integration, such as algorithmic biases that could lead to unfair discrimination in policy underwriting and claims adjudication. Regulatory frameworks need to evolve to address these issues, promoting fairness and transparency in AI applications. Policymakers play a critical role in creating a balanced regulatory environment that fosters innovation while protecting consumer rights and ensuring market stability. In conclusion, the integration of AI in cyber insurance presents both regulatory and economic challenges that require a coordinated approach involving regulators, insurers, and other stakeholders. By navigating these challenges effectively, the industry can harness the transformative potential of AI, driving advancements in risk management and enhancing the resilience of the cyber insurance market. 
This paper provides insights and recommendations for policymakers and industry leaders to achieve a balanced and sustainable integration of AI technologies in cyber insurance.
Keywords: artificial intelligence (AI), cyber insurance, regulatory compliance, economic impact, risk assessment, fraud detection, cyber liability insurance, risk management, ransomware
Procedia PDF Downloads 33
113 Continuous and Discontinuous Modeling of Wellbore Instability in Anisotropic Rocks
Authors: C. Deangeli, P. Obentaku Obenebot, O. Omwanghe
Abstract:
The study focuses on the analysis of wellbore instability in rock masses affected by weakness planes. In this type of rock, failure can occur in the rock matrix and/or along the weakness planes, depending on the mud weight gradient, so the simple Kirsch solution coupled with a failure criterion cannot supply a suitable scenario for borehole instabilities. Two different numerical approaches have been used to investigate the onset of local failure at the wall of a borehole. For each approach, the influence of the inclination of the weakness planes has been investigated by considering joint sets at 0°, 35° and 90° to the horizontal. The first set of models was carried out with FLAC 2D (Fast Lagrangian Analysis of Continua), treating the rock material as a continuous medium with a Mohr-Coulomb criterion for the rock matrix and the ubiquitous-joint model to account for the presence of the weakness planes. In this model, yield may occur in the solid, along the weak plane, or in both, depending on the stress state, the orientation of the weak plane, and the material properties of the solid and the weak plane. The second set of models was performed with PFC2D (Particle Flow Code). This code is based on the Discrete Element Method and considers the rock material as an assembly of grains bonded by cement-like material, with pore spaces. The presence of weakness planes is simulated by degrading the bonds between grains along given directions. In general, the results of the two approaches are in agreement. However, the discrete approach seems to capture more complex phenomena related to local failure in the form of grain detachment at the wall of the borehole. In fact, the presence of weakness planes in the discontinuous medium leads to local instability along the weak planes even in conditions not predicted by the continuous solution. Slip failure locations and directions generally do not follow the conventional wellbore breakout direction but depend upon the internal friction angle and the orientation of the bedding planes. When the weakness planes are at 0° or 90°, the behaviour is similar to that of a continuous rock material, but borehole instability is more severe when the weakness planes are inclined at an angle between 0° and 90° to the horizontal. In conclusion, the results of the numerical simulations show that the prediction of local failure at the wellbore wall cannot disregard the presence of weakness planes and, consequently, the higher mud weight required for stability at any specific joint inclination. Although the discrete approach can only simulate smaller regions, because of the large number of particles required to generate the rock material, it seems to capture more correctly the occurrence of failure at the microscale and, eventually, the propagation of the failed zone to a large portion of rock around the wellbore.
Keywords: continuous-discontinuous, numerical modelling, weakness planes, wellbore, FLAC 2D
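As a companion to the continuum case discussed above, the sketch below (not taken from the paper) evaluates the classical Kirsch wall stresses for a vertical borehole in isotropic elastic rock and applies a Mohr-Coulomb check to the rock matrix; the in-situ stresses, mud pressure, and strength parameters are assumed values for illustration only.

```python
import numpy as np

# Minimal sketch: Kirsch stresses at the wall of a vertical borehole and a
# Mohr-Coulomb shear (breakout) check. All input values are assumed.
def wall_failure(sig_H, sig_h, p_mud, ucs, phi_deg, n_theta=181):
    theta = np.linspace(0.0, np.pi, n_theta)        # angle from the sig_H direction
    sig_theta = sig_H + sig_h - 2.0 * (sig_H - sig_h) * np.cos(2.0 * theta) - p_mud
    sig_r = np.full_like(theta, p_mud)               # radial stress at the wall
    q = (1.0 + np.sin(np.radians(phi_deg))) / (1.0 - np.sin(np.radians(phi_deg)))
    fails = sig_theta >= ucs + q * sig_r             # Mohr-Coulomb in principal form
    return np.degrees(theta[fails])                  # wall angles predicted to fail

# Example (MPa): breakout band appears roughly perpendicular to sig_H
print(wall_failure(sig_H=40.0, sig_h=25.0, p_mud=10.0, ucs=30.0, phi_deg=30.0))
```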
Procedia PDF Downloads 499
112 An Action Toolkit for Health Care Services Driving Disability Inclusion in Universal Health Coverage
Authors: Jill Hanass-Hancock, Bradley Carpenter, Samantha Willan, Kristin Dunkle
Abstract:
Access to quality health care for persons with disabilities is the litmus test in our strive toward universal health coverage. Persons with disabilities experience a variety of health disparities related to increased health risks, greater socioeconomic challenges, and persistent ableism in the provision of health care. In low- and middle-income countries, the support needed to address the diverse needs of persons with disabilities and close the gaps in inclusive and accessible health care can appear overwhelming to staff with little knowledge and tools available. An action-orientated disability inclusion toolkit for health facilities was developed through consensus-building consultations and field testing in South Africa. The co-creation of the toolkit followed a bottom-up approach with healthcare staff and persons with disabilities in two developmental cycles. In cycle one, a disability facility assessment tool was developed to increase awareness of disability accessibility and service delivery gaps in primary healthcare services in a simple and action-orientated way. In cycle two, an intervention menu was created, enabling staff to respond to identified gaps and improve accessibility and inclusion. Each cycle followed five distinct steps of development: a review of needs and existing tools, design of the draft tool, consensus discussion to adapt the tool, pilot-testing and adaptation of the tool, and identification of the next steps. The continued consultations, adaptations, and field-testing allowed the team to discuss and test several adaptations while co-creating a meaningful and feasible toolkit with healthcare staff and persons with disabilities. This approach led to a simplified tool design with ‘key elements’ needed to achieve universal health coverage: universal design of health facilities, reasonable accommodation, health care worker training, and care pathway linkages. The toolkit was adapted for paper or digital data entry, produces automated, instant facility reports, and has easy-to-use training guides and online modules. The cyclic approach enabled the team to respond to emerging needs. The pilot testing of the facility assessment tool revealed that healthcare workers took significant actions to change their facilities after an assessment. However, staff needed information on how to improve disability accessibility and inclusion, where to acquire accredited training, and how to improve disability data collection, referrals, and follow-up. Hence, intervention options were needed for each ‘key element’. In consultation with representatives from the health and disability sectors, tangible and feasible solutions/interventions were identified. This process included the development of immediate/low-cost and long-term solutions. The approach gained buy-in from both sectors, who called for including the toolkit in the standard quality assessments for South Africa’s health care services. Furthermore, the process identified tangible solutions for each ‘key element’ and highlighted where research and development are urgently needed. The cyclic and consultative approach enabled the development of a feasible facility assessment tool and a complementary intervention menu, moving facilities toward universal health coverage for and persons with disabilities in low- or better-resourced contexts while identifying gaps in the availability of interventions.Keywords: public health, disability, accessibility, inclusive health care, universal health coverage
Procedia PDF Downloads 77
111 Calculation of Organ Dose for Adult and Pediatric Patients Undergoing Computed Tomography Examinations: A Software Comparison
Authors: Aya Al Masri, Naima Oubenali, Safoin Aktaou, Thibault Julien, Malorie Martin, Fouad Maaloul
Abstract:
Introduction: The increasing number of Computed Tomography (CT) examinations raises public concern about the associated stochastic risk to patients. In its Publication 102, the International Commission on Radiological Protection (ICRP) emphasized the importance of managing patient dose, particularly from repeated or multiple examinations. We developed a Dose Archiving and Communication System that provides multiple dose indexes (organ dose, effective dose, and skin-dose mapping) for patients undergoing radiological imaging exams. The aim of this study is to compare the organ dose values given by our software for patients undergoing CT exams with those of another software package, VirtualDose. Materials and methods: Our software uses Monte Carlo simulations to calculate organ doses for patients undergoing CT examinations. The general calculation principle consists of simulating (1) the scanner with all its technical specifications and associated irradiation settings (kVp, field collimation, mAs, pitch, etc.) and (2) the detailed geometric and compositional information of dozens of well-identified organs of computational hybrid phantoms that contain the necessary anatomical data. The mass and elemental composition of the tissues and organs that constitute our phantoms follow the recommendations of the international organizations (namely the ICRP and the ICRU), and their body dimensions correspond to reference data developed in the United States. Simulated data were verified by clinical measurement. For the comparison, 270 adult patients and 150 pediatric patients were used, whose data correspond to exams carried out in French hospital centers. The adult comparison dataset includes males and females for three different scanner machines and three different acquisition protocols (Head, Chest, and Chest-Abdomen-Pelvis). The pediatric comparison sample includes the exams of thirty patients for each of the following age groups: newborn, 1-2 years, 3-7 years, 8-12 years, and 13-16 years. The pediatric comparison was performed on the Head protocol. The percentage dose difference was calculated for organs receiving a significant dose according to the acquisition protocol (at least 80% of the maximal dose). Results: Adult patients: for organs that are completely covered by the scan range, the maximum percentage dose difference between the two software packages is 27%. However, three organs situated at the edges of the scan range show a slightly higher dose difference. Pediatric patients: the percentage dose difference between the two software packages does not exceed 30%. These dose differences may be due to the use of two different generations of hybrid phantoms by the two packages. Conclusion: This study shows that our software provides reliable dosimetric information for patients undergoing Computed Tomography exams.
Keywords: adult and pediatric patients, computed tomography, organ dose calculation, software comparison
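A minimal sketch of the comparison metric described above, i.e., the percentage dose difference restricted to organs receiving at least 80% of the maximum organ dose for the protocol; the organ names and dose values are invented for illustration and are not the study's data.

```python
# Illustrative organ doses (mGy) from two hypothetical software packages
doses_a = {"lungs": 12.1, "breast": 11.4, "thyroid": 10.2, "liver": 4.0}
doses_b = {"lungs": 13.0, "breast": 10.5, "thyroid": 12.6, "liver": 5.1}

threshold = 0.8 * max(doses_a.values())      # keep organs with a significant dose
for organ, da in doses_a.items():
    if da >= threshold:
        diff_pct = 100.0 * abs(da - doses_b[organ]) / da
        print(f"{organ}: {diff_pct:.1f} % difference")
```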
Procedia PDF Downloads 162
110 The Effect of Nanocomposite on the Release of Imipenem on Bacteria Causing Infections with Implants
Authors: Mohammad Hossein Pazandeh, Monir Doudi, Sona Rostampour Yasouri
Abstract:
The prudent administration of antibiotics aims to avoid side effects and the development of microbial resistance. Methods for the local administration of antibiotics are especially needed for localized infections caused by bacterial colonization of medical devices or implant materials. Among the wide variety of materials used as drug delivery systems, bioactive glasses (BG) are widely used in regenerative medicine. This work comprises, first, the production of a bioactive glass/nickel oxide/tin dioxide nanocomposite by the sol-gel method; second, the study of the controlled release of imipenem from the double metal oxide/bioactive glass nanocomposite; and finally, the investigation of the antibacterial properties of the nanocomposite against a number of implant-related infectious agents. In this study, BG/SnO2 and BG/NiO single systems with different metal oxide contents, as well as BG/NiO/SnO2 nanocomposites, were synthesized by sol-gel as drug carriers for tetracycline and imipenem. These two antibiotics are widely used for osteomyelitis because of their favorable penetration and bactericidal effect on all the probable osteomyelitis pathogens. The antibacterial activity of the synthesized samples was evaluated against Staphylococcus aureus, Escherichia coli, and Pseudomonas aeruginosa as model bacteria using the disk diffusion method. The modification of BG with metal oxides gave the metal oxide-containing samples antibacterial properties, with the highest efficiency for the nanocomposite. The bioactivity of all samples was assessed by determining the surface morphology and the structural and compositional changes using scanning electron microscopy (SEM), FTIR, and X-ray diffraction (XRD) spectroscopy, respectively, after soaking in simulated body fluid (SBF) for 28 days. Hydroxyapatite formation was clearly observed as a measure of bioactivity. The BG nanocomposite sample was then loaded with the two antibiotics separately, and their release profiles were studied. The BG nanocomposite sample showed slow and continuous drug release over a period of 72 hours, which is desirable for a drug delivery system. The antibiotic-loaded nanocomposite sample retained its antibacterial properties and showed an inactivating effect against the bacteria under test. The modified bioactive glass, which forms hydroxyapatite, releases the drug in a controlled manner, and is effective against bacterial infections, can be introduced as a scaffold for bone implants in biomedical applications after clinical trials. Given the formation of biofilms by infectious bacteria after adhesion to the surfaces of implants, medical devices, etc., and the complications of traditional methods, addressing the problems caused by these microorganisms in the technical and biomedical industries was a key necessity motivating this research.
Keywords: antibacterial, bioglass, drug delivery system, sol-gel
Procedia PDF Downloads 60
109 Numerical Simulation on Two Components Particles Flow in Fluidized Bed
Authors: Wang Heng, Zhong Zhaoping, Guo Feihong, Wang Jia, Wang Xiaoyi
Abstract:
Flow of gas and particles in fluidized beds is complex and chaotic, which makes it difficult to measure and analyze experimentally. Some bed materials with poor fluidization behaviour are always fluidized together with a fluidizing medium, and the material and the fluidizing medium differ in many properties such as density, size, and shape. These factors make the dynamic process more complex and limit experimental research. Numerical simulation is an efficient way to describe gas-solid flow in a fluidized bed, and one of the most popular methods is CFD-DEM, i.e., computational fluid dynamics coupled with the discrete element method. Particle shapes are usually simplified to spheres in most studies. Although sphere-shaped particles simplify the particle calculations, the effects of different shapes are then disregarded. In practical applications, however, two-component systems in fluidized beds contain both sphere-shaped and non-sphere-shaped particles, so the two-component flow of spherical and non-spherical particles needs to be studied. In this paper, the mixing flow of molded biomass particles and quartz in a fluidized bed was simulated. The integrated model was built on an Eulerian-Lagrangian approach that was improved to suit non-spherical particles. The cylinder-shaped particles were constructed differently in the two numerical methods. In the CFD part, each cylinder-shaped particle was constructed as an agglomerate of fictitious small particles, meaning that the small fictitious particles are gathered but not merged with each other. The diameter of a fictitious particle, d_fic, and its solid volume fraction inside a cylinder-shaped particle, α_fic (called the fictitious volume fraction), are introduced to modify the drag coefficient β, together with the volume fractions of the cylinder-shaped particles, α_cld, and of the sphere-shaped particles, α_sph. In a computational cell, the void fraction can then be expressed as ε = 1 - α_cld·α_fic - α_sph. The Ergun equation and the Wen and Yu equation were used to calculate β. In the DEM part, cylinder-shaped particles were built by the multi-sphere method, in which small sphere elements are merged with each other. A soft-sphere model was used to obtain the contact forces between particles, and the total contact force on a cylinder-shaped particle was calculated as the sum of the forces on its small sphere elements. The model (size = 1 × 0.15 × 0.032 m³) contained 420,000 sphere-shaped particles (diameter = 0.8 mm, density = 1350 kg/m³) and 60 cylinder-shaped particles (diameter = 10 mm, length = 10 mm, density = 2650 kg/m³). Each cylinder-shaped particle was constructed from 2072 small sphere-shaped particles (d = 0.8 mm) in the CFD mesh and 768 sphere-shaped particles (d = 3 mm) in the DEM mesh. The lengths of the CFD and DEM cells are 1 mm and 2 mm, respectively. The superficial gas velocity was varied between models as 1.0 m/s, 1.5 m/s, and 2.0 m/s. The simulation results were compared with experimental results. The particles moved in a regular, fountain-like pattern, and the effect of superficial gas velocity on the cylinder-shaped particles was stronger than on the sphere-shaped particles. The results show that the present work provides an effective approach to simulating the flow of two-component particle mixtures.
Keywords: computational fluid dynamics, discrete element method, fluidized bed, multiphase flow
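The following sketch illustrates the void-fraction correction and the Ergun / Wen and Yu drag switch described above, written in the commonly used Gidaspow-type form; the correlation constants and the example values are assumptions, not the authors' implementation.

```python
import numpy as np

# Local void fraction with fictitious particles inside the cylinder agglomerates
def void_fraction(alpha_cld, alpha_fic, alpha_sph):
    return 1.0 - alpha_cld * alpha_fic - alpha_sph

# Interphase drag coefficient beta: Ergun in the dense regime, Wen & Yu otherwise
def beta_drag(eps, d_p, rho_g, mu_g, slip):
    """slip = |u_g - u_s|; SI units assumed throughout."""
    if eps < 0.8:                                   # dense regime: Ergun
        return (150.0 * (1 - eps) ** 2 * mu_g / (eps * d_p ** 2)
                + 1.75 * (1 - eps) * rho_g * slip / d_p)
    re_p = eps * rho_g * d_p * slip / mu_g          # dilute regime: Wen & Yu
    cd = 24.0 / re_p * (1 + 0.15 * re_p ** 0.687) if re_p < 1000 else 0.44
    return 0.75 * cd * rho_g * eps * (1 - eps) * slip / d_p * eps ** (-2.65)

eps = void_fraction(alpha_cld=0.05, alpha_fic=0.6, alpha_sph=0.15)
print(eps, beta_drag(eps, d_p=0.8e-3, rho_g=1.2, mu_g=1.8e-5, slip=1.5))
```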
Procedia PDF Downloads 326
108 Recent Advances in Research on Carotenoids: From Agrofood Production to Health Outcomes
Authors: Antonio J. Melendez-Martinez
Abstract:
Beyond their role as natural colorants, some carotenoids are provitamins A and may be involved in health-promoting biological actions and contribute to reducing the risk of developing non-communicable diseases, including several types of cancer, cardiovascular disease, eye conditions, skin disorders or metabolic disorders. Given the versatility of carotenoids, the COST-funded European network to advance carotenoid research and applications in agro-food and health (EUROCAROTEN) is aimed at promoting health through the diet and increasing well-being by means. Stakeholders from 38 countries participate in this network, and one of its main objectives is to promote research on little-studied carotenoids. In this contribution, recent advances of our research group and collaborators in the study of two such understudied carotenoids, namely phytoene and phytofluene, the colorless carotenoids, are outlined. The study of these carotenoids is important as they have been largely neglected despite they are present in our diets, fluids, and tissues, and evidence is accumulating that they may be involved in health-promoting actions. More specifically, studies on their levels in diverse tomato and orange varieties were carried out as well as on their potential bioavailability from different dietary sources. Furthermore, the potential effect of these carotenoids on an animal model subjected to oxidative stress was evaluated. The tomatoes were grown in research greenhouses, and some of them were subjected to regulated deficit irrigation, a sustainable agronomic practice. The citrus samples were obtained from an experimental field. The levels of carotenoids were assessed using HPLC according to routine methodologies followed in our lab. Regarding the potential bioavailability (bioaccessibility) studies, different products containing colorless carotenoids, like fruits, juices, were subjected to simulated in vitro digestions, and their incorporation into mixed micelles was assessed. The effect of the carotenoids on oxidative stress was evaluated on the Caenorhabditis elegans model. For that purpose, the worms were subjected to oxidative stress by means of a hydrogen peroxide challenge. In relation to the presence of colorless carotenoids in tomatoes and orange varieties, it was observed that they are widespread in such products and that there are mutants with very high quantities of them, for instance, the Cara Cara or Pinalate mutant oranges. The studies on their bioaccessibility revealed that, in general, phytoene and phytofluene are more bioaccessible than other common dietary carotenoids, probably due to their distinctive chemical structure. About the in vivo antioxidant capacity of phytoene and phytofluene, it was observed that they both exerted antioxidant effects at certain doses. In conclusion, evidence on the importance of phytoene and phytofluene as dietary easily bioavailable and antioxidant carotenoids has been obtained in recent studies from our group, which can be important shortly to innovate in health-promotion through the development of functional foods and related products.Keywords: carotenoids, health, functional foods, nutrition, phytoene, phytofluene
Procedia PDF Downloads 103
107 Three Dimensional Computational Fluid Dynamics Simulation of Wall Condensation inside Inclined Tubes
Authors: Amirhosein Moonesi Shabestary, Eckhard Krepper, Dirk Lucas
Abstract:
The current PhD project comprises CFD-modeling and simulation of condensation and heat transfer inside horizontal pipes. Condensation plays an important role in emergency cooling systems of reactors. The emergency cooling system consists of inclined horizontal pipes which are immersed in a tank of subcooled water. In the case of an accident the water level in the core is decreasing, steam comes in the emergency pipes, and due to the subcooled water around the pipe, this steam will start to condense. These horizontal pipes act as a strong heat sink which is responsible for a quick depressurization of the reactor core when any accident happens. This project is defined in order to model all these processes which happening in the emergency cooling systems. The most focus of the project is on detection of different morphologies such as annular flow, stratified flow, slug flow and plug flow. This project is an ongoing project which has been started 1 year ago in Helmholtz Zentrum Dresden Rossendorf (HZDR), Fluid Dynamics department. In HZDR most in cooperation with ANSYS different models are developed for modeling multiphase flows. Inhomogeneous MUSIG model considers the bubble size distribution and is used for modeling small-scaled dispersed gas phase. AIAD (Algebraic Interfacial Area Density Model) is developed for detection of the local morphology and corresponding switch between them. The recent model is GENTOP combines both concepts. GENTOP is able to simulate co-existing large-scaled (continuous) and small-scaled (polydispersed) structures. All these models are validated for adiabatic cases without any phase change. Therefore, the start point of the current PhD project is using the available models and trying to integrate phase transition and wall condensing models into them. In order to simplify the idea of condensation inside horizontal tubes, 3 steps have been defined. The first step is the investigation of condensation inside a horizontal tube by considering only direct contact condensation (DCC) and neglect wall condensation. Therefore, the inlet of the pipe is considered to be annular flow. In this step, AIAD model is used in order to detect the interface. The second step is the extension of the model to consider wall condensation as well which is closer to the reality. In this step, the inlet is pure steam, and due to the wall condensation, a liquid film occurs near the wall which leads to annular flow. The last step will be modeling of different morphologies which are occurring inside the tube during the condensation via using GENTOP model. By using GENTOP, the dispersed phase is able to be considered and simulated. Finally, the results of the simulations will be validated by experimental data which will be available also in HZDR.Keywords: wall condensation, direct contact condensation, AIAD model, morphology detection
Procedia PDF Downloads 304
106 Climate Change and Landslide Risk Assessment in Thailand
Authors: Shotiros Protong
Abstract:
Sudden landslides in Thailand have occurred more frequently and more severely during the past decade. It is necessary to focus on the principal parameters used for analysis, such as land cover and land use, rainfall values, soil characteristics, and the digital elevation model (DEM). The combination of intense rainfall and severe monsoons is increasing due to global climate change, and landslide occurrences rise rapidly during intense rainfall, especially in the Thai rainy season, which usually starts around mid-May and ends in the middle of October. Rain-triggered landslide hazard analysis is the focus of this research. The combination of geotechnical and hydrological data is used to determine permeability, conductivity, bedding orientation, overburden, and the presence of loose blocks. Regional landslide hazard mapping is developed using the Stability Index Mapping (SINMAP) model implemented in ArcGIS version 10.1. Geological and land use data are used to define the probability of landslide occurrence in terms of geotechnical data: the geological data indicate the shear strength and friction angle values for soils above given rock types, which gives the approach general applicability for landslide hazard analysis. To address the research objectives, the following methods are described in this study: setup and calibration of the SINMAP model, sensitivity analysis of the SINMAP model, geotechnical laboratory testing, landslide assessment at the present calibration, and landslide assessment under future climate simulation scenarios A2 and B2. For the hydrological data, average 24-hour rainfall (mm/24 h) is used to assess rain-triggered landslide hazard in the slope stability mapping, with the 1954-2012 period serving as the rainfall baseline for the present calibration. For climate change in Thailand, future climate scenarios are simulated at appropriate spatial and temporal scales. Since the precipitation impact needs to be predicted for the future climate, the Statistical DownScaling Model (SDSM) version 4.2 is used to simulate future change between latitudes 16° 26' and 18° 37' north and longitudes 98° 52' and 103° 05' east. The research allows the mapping of risk parameters for landslide dynamics and indicates the spatial and temporal trends of landslide occurrence. Thus, regional landslide hazard mapping is produced under present-day climatic conditions from 1954 to 2012 and under simulations of climate change based on GCM scenarios A2 and B2 from 2013 to 2099, related to the threshold rainfall values for the selected study area in Uttaradit province in northern Thailand. Finally, the landslide hazard maps for the present and for the future under climate simulation scenarios A2 and B2 in Uttaradit province are compared in terms of area (km²).
Keywords: landslide hazard, GIS, slope stability index (SINMAP), landslides, Thailand
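For orientation, a sketch of the infinite-slope factor of safety that underlies SINMAP is given below, with dimensionless cohesion, relative wetness, and the water-to-soil density ratio; the parameter values are illustrative and are not calibrated for the Uttaradit study area.

```python
import numpy as np

# Infinite-slope factor of safety in the SINMAP-style dimensionless form.
# C: dimensionless cohesion, r = rho_w / rho_s, recharge R (m/h),
# spec_area a (m), transmissivity T (m^2/h). All values assumed.
def factor_of_safety(slope_deg, C=0.2, phi_deg=32.0, r=0.5,
                     recharge=0.005, spec_area=500.0, transmissivity=10.0):
    theta = np.radians(slope_deg)
    phi = np.radians(phi_deg)
    # steady-state relative wetness, capped at full saturation
    w = np.minimum(recharge * spec_area / (transmissivity * np.sin(theta)), 1.0)
    return (C + np.cos(theta) * (1.0 - w * r) * np.tan(phi)) / np.sin(theta)

for s in (15, 25, 35, 45):
    print(s, "deg ->", round(float(factor_of_safety(s)), 2))
```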
Procedia PDF Downloads 564
105 A Comparison of Methods for Estimating Dichotomous Treatment Effects: A Simulation Study
Authors: Jacqueline Y. Thompson, Sam Watson, Lee Middleton, Karla Hemming
Abstract:
Introduction: The odds ratio (estimated via logistic regression) is a well-established and common approach for estimating covariate-adjusted binary treatment effects when comparing a treatment and control group with dichotomous outcomes. Its popularity is primarily because of its stability and robustness to model misspecification. However, the situation is different for the relative risk and risk difference, which are arguably easier to interpret and better suited to specific designs such as non-inferiority studies. So far, there is no equivalent, widely acceptable approach to estimate an adjusted relative risk and risk difference when conducting clinical trials. This is partly due to the lack of a comprehensive evaluation of available candidate methods. Methods/Approach: A simulation study is designed to evaluate the performance of relevant candidate methods to estimate relative risks to represent conditional and marginal estimation approaches. We consider the log-binomial, generalised linear models (GLM) with iteratively weighted least-squares (IWLS) and model-based standard errors (SE); log-binomial GLM with convex optimisation and model-based SEs; log-binomial GLM with convex optimisation and permutation tests; modified-Poisson GLM IWLS and robust SEs; log-binomial generalised estimation equations (GEE) and robust SEs; marginal standardisation and delta method SEs; and marginal standardisation and permutation test SEs. Independent and identically distributed datasets are simulated from a randomised controlled trial to evaluate these candidate methods. Simulations are replicated 10000 times for each scenario across all possible combinations of sample sizes (200, 1000, and 5000), outcomes (10%, 50%, and 80%), and covariates (ranging from -0.05 to 0.7) representing weak, moderate or strong relationships. Treatment effects (ranging from 0, -0.5, 1; on the log-scale) will consider null (H0) and alternative (H1) hypotheses to evaluate coverage and power in realistic scenarios. Performance measures (bias, mean square error (MSE), relative efficiency, and convergence rates) are evaluated across scenarios covering a range of sample sizes, event rates, covariate prognostic strength, and model misspecifications. Potential Results, Relevance & Impact: There are several methods for estimating unadjusted and adjusted relative risks. However, it is unclear which method(s) is the most efficient, preserves type-I error rate, is robust to model misspecification, or is the most powerful when adjusting for non-prognostic and prognostic covariates. GEE estimations may be biased when the outcome distributions are not from marginal binary data. Also, it seems that marginal standardisation and convex optimisation may perform better than GLM IWLS log-binomial.Keywords: binary outcomes, statistical methods, clinical trials, simulation study
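As an illustration of one of the candidate methods listed above, the sketch below performs marginal standardisation on simulated trial data with statsmodels: a logistic model is fitted, risks are predicted for every subject under treatment and under control, and the averaged risks give the marginal relative risk and risk difference (standard errors via the delta method or bootstrap would be added on top). The data-generating values are arbitrary.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate a simple randomised trial with one prognostic covariate
rng = np.random.default_rng(42)
n = 1000
x = rng.normal(size=n)
treat = rng.integers(0, 2, size=n)
logit_p = -0.5 + 0.7 * x + 0.4 * treat
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit_p)))
df = pd.DataFrame({"y": y, "treat": treat, "x": x})

fit = smf.logit("y ~ treat + x", data=df).fit(disp=0)

# Marginal standardisation: predict risk for everyone as if treated / untreated
p1 = fit.predict(df.assign(treat=1)).mean()
p0 = fit.predict(df.assign(treat=0)).mean()
print("marginal RR:", p1 / p0, "marginal RD:", p1 - p0)
```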
Procedia PDF Downloads 114
104 Development of a Mixed-Reality Hands-Free Teleoperated Robotic Arm for Construction Applications
Authors: Damith Tennakoon, Mojgan Jadidi, Seyedreza Razavialavi
Abstract:
With recent advancements of automation in robotics, from self-driving cars to autonomous 4-legged quadrupeds, one industry that has been stagnant is the construction industry. The methodologies used in a modern-day construction site consist of arduous physical labor and the use of heavy machinery, which has not changed over the past few decades. The dangers of a modern-day construction site affect the health and safety of the workers due to performing tasks such as lifting and moving heavy objects and having to maintain unhealthy posture to complete repetitive tasks such as painting, installing drywall, and laying bricks. Further, training for heavy machinery is costly and requires a lot of time due to their complex control inputs. The main focus of this research is using immersive wearable technology and robotic arms to perform the complex and intricate skills of modern-day construction workers while alleviating the physical labor requirements to perform their day-to-day tasks. The methodology consists of mounting a stereo vision camera, the ZED Mini by Stereolabs, onto the end effector of an industrial grade robotic arm, streaming the video feed into the Virtual Reality (VR) Meta Quest 2 (Quest 2) head-mounted display (HMD). Due to the nature of stereo vision, and the similar field-of-views between the stereo camera and the Quest 2, human-vision can be replicated on the HMD. The main advantage this type of camera provides over a traditional monocular camera is it gives the user wearing the HMD a sense of the depth of the camera scene, specifically, a first-person view of the robotic arm’s end effector. Utilizing the built-in cameras of the Quest 2 HMD, open-source hand-tracking libraries from OpenXR can be implemented to track the user’s hands in real-time. A mixed-reality (XR) Unity application can be developed to localize the operator's physical hand motions with the end-effector of the robotic arm. Implementing gesture controls will enable the user to move the robotic arm and control its end-effector by moving the operator’s arm and providing gesture inputs from a distant location. Given that the end effector of the robotic arm is a gripper tool, gripping and opening the operator’s hand will translate to the gripper of the robot arm grabbing or releasing an object. This human-robot interaction approach provides many benefits within the construction industry. First, the operator’s safety will be increased substantially as they can be away from the site-location while still being able perform complex tasks such as moving heavy objects from place to place or performing repetitive tasks such as painting walls and laying bricks. The immersive interface enables precision robotic arm control and requires minimal training and knowledge of robotic arm manipulation, which lowers the cost for operator training. This human-robot interface can be extended to many applications, such as handling nuclear accident/waste cleanup, underwater repairs, deep space missions, and manufacturing and fabrication within factories. Further, the robotic arm can be mounted onto existing mobile robots to provide access to hazardous environments, including power plants, burning buildings, and high-altitude repair sites.Keywords: construction automation, human-robot interaction, hand-tracking, mixed reality
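A hypothetical sketch (not the authors' code) of the mapping described above, from a tracked hand pose in the headset frame to an end-effector target and a pinch-based gripper command; the frame transform and the pinch threshold are assumed values.

```python
import numpy as np

def hand_to_robot(wrist_pos_hmd, thumb_tip, index_tip,
                  R_hmd_to_robot, t_hmd_to_robot, pinch_close_m=0.02):
    """Map a tracked wrist position (headset frame) to a robot-frame target and
    turn the thumb-index pinch distance into a simple gripper command."""
    target = R_hmd_to_robot @ np.asarray(wrist_pos_hmd) + t_hmd_to_robot
    pinch = np.linalg.norm(np.asarray(thumb_tip) - np.asarray(index_tip))
    gripper_cmd = "close" if pinch < pinch_close_m else "open"
    return target, gripper_cmd

# Example call with made-up coordinates (metres)
R = np.eye(3)
t = np.array([0.5, 0.0, 0.2])
print(hand_to_robot([0.10, -0.20, 0.40], [0.10, -0.18, 0.42], [0.12, -0.20, 0.43], R, t))
```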
Procedia PDF Downloads 80
103 Estimating Estimators: An Empirical Comparison of Non-Invasive Analysis Methods
Authors: Yan Torres, Fernanda Simoes, Francisco Petrucci-Fonseca, Freddie-Jeanne Richard
Abstract:
Non-invasive samples are an alternative to collecting genetic samples directly. They are collected without handling the animal (e.g., scats, feathers, and hairs). Nevertheless, the use of non-invasive samples has some limitations. The main issue is degraded DNA, leading to poorer extraction efficiency and genotyping; these errors delayed the widespread use of non-invasive genetic information for some years. Genotyping errors can be limited by using analysis methods that accommodate the errors and singularities of non-invasive samples, and genotype matching and population estimation algorithms stand out as important analysis tools that have been adapted to deal with those errors. Despite this recent development of analysis methods, there is still a lack of empirical comparisons of their performance. A comparison of methods on datasets differing in size and structure can be useful for future studies, since non-invasive samples are a powerful tool for obtaining information, especially for endangered and rare populations. To compare the analysis methods, four different datasets obtained from the Dryad digital repository were used. Three matching algorithms (Cervus, Colony, and Error Tolerant Likelihood Matching - ETLM) were used for genotype matching, and two algorithms were used for population estimation (Capwire and BayesN). The three matching algorithms showed different patterns of results. ETLM produced a lower number of unique individuals and recaptures, and a similarity in the matched genotypes between Colony and Cervus was observed. That is not a surprise, given the similarity between those methods in their pairwise likelihood and clustering algorithms. The ETLM matches showed almost no similarity with the genotypes matched by the other methods; its different clustering system and error model seem to lead to a more stringent selection, although the processing time and user-friendliness of ETLM were the worst among the compared methods. The population estimators performed differently depending on the dataset, with a consensus between the estimators for only one dataset. BayesN gave both higher and lower estimates than Capwire. Unlike Capwire, BayesN does not consider the total number of recaptures, only the recapture events, which makes the estimator sensitive to data heterogeneity, in the sense of different capture rates between individuals. In these examples, tolerance for homogeneity seems to be crucial for BayesN to work properly. Both methods are user-friendly and have reasonable processing times. An expanded analysis with simulated genotype data could clarify the sensitivity of the algorithms. The present comparison of the matching methods indicates that Colony seems to be the most appropriate for general use considering the balance between time, interface, and robustness. The heterogeneity of the recaptures strongly affected the BayesN estimates, leading to over- and underestimation of population numbers; Capwire is therefore advisable for general use, since it performs better in a wide range of situations.
Keywords: algorithms, genetics, matching, population
Procedia PDF Downloads 143
102 Electrical Degradation of GaN-based p-channel HFETs Under Dynamic Electrical Stress
Authors: Xuerui Niu, Bolin Wang, Xinchuang Zhang, Xiaohua Ma, Bin Hou, Ling Yang
Abstract:
The application of discrete GaN-based power switches requires the collaboration of silicon-based peripheral circuit structures. However, the packaging and interconnection between the Si and GaN devices can introduce parasitic effects into the circuit, which have a great impact on GaN power transistors. GaN-based monolithic power integration technology is an emerging solution that can improve circuit stability and allow GaN-based devices to take on more functions. Complementary logic circuits consisting of GaN-based E-mode p-channel heterostructure field-effect transistors (p-HFETs) and E-mode n-channel HEMTs can serve as the gate drivers. E-mode p-HFETs with a recessed gate have attracted increasing interest because of their low leakage current and large gate swing; however, they suffer from a poor interface between the gate dielectric and the polarized nitride layers. The reliability of p-HFETs is analyzed and discussed in this work. In circuit applications, an inverter always operates with a dynamic gate voltage (VGS) rather than a constant VGS. Therefore, dynamic electrical stress has been applied to resemble the operating conditions of E-mode p-HFETs. The dynamic electrical stress condition is as follows: VGS is a square waveform switching from -5 V to 0 V, VDS is fixed, and the source is grounded. The frequency of the square waveform is 100 kHz, with rise/fall times of 100 ns and a duty ratio of 50%. The effective stress time is 1000 s. A number of stress tests were carried out, with the stress briefly interrupted to measure the linear and saturation IDS-VGS characteristics. When VGS switches from -5 V to 0 V and VDS = 0 V, the devices are under a negative-bias-instability (NBI) condition: holes are trapped at the interface between the oxide layer and the GaN channel layer, which reduces VTH. The negative shift of VTH is pronounced during the first 10 s and then changes only slightly with further stress time. A different phenomenon is observed when VDS is reduced to -5 V: VTH shifts negatively during stress, and the variation in VTH increases with time, unlike the case with VDS = 0 V. Two mechanisms exist in this condition. On the one hand, the electric field in the gate region is influenced by the drain voltage, so the hole-trapping behavior in the gate region changes and the impact of the gate voltage is weakened. On the other hand, a large drain voltage can induce hot-hole generation and lead to serious hot-carrier stress (HCS) degradation over time. The poor-quality interface between the oxide layer and the GaN channel layer in the gate region is a major contributor to the high density of interface traps, which greatly influences device reliability. These results emphasize that improved etching and pretreatment processes need to be developed so that high-performance GaN complementary logic with enhanced stability can be achieved.
Keywords: GaN-based E-mode p-HFETs, dynamic electric stress, threshold voltage, monolithic power integration technology
Procedia PDF Downloads 90
101 Agent-Based Modeling Investigating Self-Organization in Open, Non-equilibrium Thermodynamic Systems
Authors: Georgi Y. Georgiev, Matthew Brouillet
Abstract:
This research applies the power of agent-based modeling to a pivotal question at the intersection of biology, computer science, physics, and complex systems theory about the self-organization processes in open, complex, non-equilibrium thermodynamic systems. Central to this investigation is the principle of Maximum Entropy Production (MEP). This principle suggests that such systems evolve toward states that optimize entropy production, leading to the formation of structured environments. It is hypothesized that guided by the least action principle, open thermodynamic systems identify and follow the shortest paths to transmit energy and matter, resulting in maximal entropy production, internal structure formation, and a decrease in internal entropy. Concurrently, it is predicted that there will be an increase in system information as more information is required to describe the developing structure. To test this, an agent-based model is developed simulating an ant colony's formation of a path between a food source and its nest. Utilizing the Netlogo software for modeling and Python for data analysis and visualization, self-organization is quantified by calculating the decrease in system entropy based on the potential states and distribution of the ants within the simulated environment. External entropy production is also evaluated for information increase and efficiency improvements in the system's action. Simulations demonstrated that the system begins at maximal entropy, which decreases as the ants form paths over time. A range of system behaviors contingent upon the number of ants are observed. Notably, no path formation occurred with fewer than five ants, whereas clear paths were established by 200 ants, and saturation of path formation and entropy state was reached at populations exceeding 1000 ants. This analytical approach identified the inflection point marking the transition from disorder to order and computed the slope at this point. Combined with extrapolation to the final path entropy, these parameters yield important insights into the eventual entropy state of the system and the timeframe for its establishment, enabling the estimation of the self-organization rate. This study provides a novel perspective on the exploration of self-organization in thermodynamic systems, establishing a correlation between internal entropy decrease rate and external entropy production rate. Moreover, it presents a flexible framework for assessing the impact of external factors like changes in world size, path obstacles, and friction. Overall, this research offers a robust, replicable model for studying self-organization processes in any open thermodynamic system. As such, it provides a foundation for further in-depth exploration of the complex behaviors of these systems and contributes to the development of more efficient self-organizing systems across various scientific fields.Keywords: complexity, self-organization, agent based modelling, efficiency
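A sketch of how the entropy calculation described above could look in Python (an assumption, not the authors' script): the Shannon entropy of the ants' spatial distribution over grid cells drops as the ants concentrate on a path.

```python
import numpy as np

def spatial_entropy(positions, world_size=50, n_bins=25):
    """Shannon entropy (bits) of ant positions binned onto a grid.
    positions: (n_ants, 2) array of x, y coordinates at one time step."""
    H, _, _ = np.histogram2d(positions[:, 0], positions[:, 1],
                             bins=n_bins, range=[[0, world_size]] * 2)
    p = H.ravel() / H.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

rng = np.random.default_rng(0)
disordered = rng.uniform(0, 50, size=(200, 2))              # ants spread out
ordered = np.column_stack([np.linspace(5, 45, 200),          # ants on a path
                           25 + rng.normal(0, 0.5, 200)])
print(spatial_entropy(disordered), spatial_entropy(ordered))  # entropy decreases
```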
Procedia PDF Downloads 68
100 Reducing System Delay to Definitive Care For STEMI Patients, a Simulation of Two Different Strategies in the Brugge Area, Belgium
Authors: E. Steen, B. Dewulf, N. Müller, C. Vandycke, Y. Vandekerckhove
Abstract:
Introduction: The care for a ST-elevation myocardial infarction (STEMI) patient is time-critical. Reperfusion therapy within 90 minutes of initial medical contact is mandatory in the improvement of the outcome. Primary percutaneous coronary intervention (PCI) without previous fibrinolytic treatment, is the preferred reperfusion strategy in patients with STEMI, provided it can be performed within guideline-mandated times. Aim of the study: During a one year period (January 2013 to December 2013) the files of all consecutive STEMI patients with urgent referral from non-PCI facilities for primary PCI were reviewed. Special attention was given to a subgroup of patients with prior out-of-hospital medical contact generated by the 112-system. In an effort to reduce out-of-hospital system delay to definitive care a change in pre-hospital 112 dispatch strategies is proposed for these time-critical patients. Actual time recordings were compared with travel time simulations for two suggested scenarios. A first scenario (SC1) involves the decision by the on scene ground EMS (GEMS) team to transport the out-of-hospital diagnosed STEMI patient straight forward to a PCI centre bypassing the nearest non-PCI hospital. Another strategy (SC2) explored the potential role of helicopter EMS (HEMS) where the on scene GEMS team requests a PCI-centre based HEMS team for immediate medical transfer to the PCI centre. Methods and Results: 49 (29,1% of all) STEMI patients were referred to our hospital for emergency PCI by a non-PCI facility. 1 file was excluded because of insufficient data collection. Within this analysed group of 48 secondary referrals 21 patients had an out-of-hospital medical contact generated by the 112-system. The other 27 patients presented at the referring emergency department without prior contact with the 112-system. The table below shows the actual time data from first medical contact to definitive care as well as the simulated possible gain of time for both suggested strategies. The PCI-team was always alarmed upon departure from the referring centre excluding further in-hospital delay. Time simulation tools were similar to those used by the 112-dispatch centre. Conclusion: Our data analysis confirms prolonged reperfusion times in case of secondary emergency referrals for STEMI patients even with the use of HEMS. In our setting there was no statistical difference in gain of time between the two suggested strategies, both reducing the secondary referral generated delay with about one hour and by this offering all patients PCI within the guidelines mandated time. However, immediate HEMS activation by the on scene ground EMS team for transport purposes is preferred. This ensures a faster availability of the local GEMS-team for its community. In case these options are not available and the guideline-mandated times for primary PCI are expected to be exceeded, primary fibrinolysis should be considered in a non-PCI centre.Keywords: STEMI, system delay, HEMS, emergency medicine
Procedia PDF Downloads 319
99 Trajectory Optimization for Autonomous Deep Space Missions
Authors: Anne Schattel, Mitja Echim, Christof Büskens
Abstract:
Trajectory planning for deep space missions has become a recent topic of great interest. Flying to space objects like asteroids provides two main challenges. One is to find rare earth elements, the other to gain scientific knowledge of the origin of the world. Due to the enormous spatial distances such explorer missions have to be performed unmanned and autonomously. The mathematical field of optimization and optimal control can be used to realize autonomous missions while protecting recourses and making them safer. The resulting algorithms may be applied to other, earth-bound applications like e.g. deep sea navigation and autonomous driving as well. The project KaNaRiA ('Kognitionsbasierte, autonome Navigation am Beispiel des Ressourcenabbaus im All') investigates the possibilities of cognitive autonomous navigation on the example of an asteroid mining mission, including the cruise phase and approach as well as the asteroid rendezvous, landing and surface exploration. To verify and test all methods an interactive, real-time capable simulation using virtual reality is developed under KaNaRiA. This paper focuses on the specific challenge of the guidance during the cruise phase of the spacecraft, i.e. trajectory optimization and optimal control, including first solutions and results. In principle there exist two ways to solve optimal control problems (OCPs), the so called indirect and direct methods. The indirect methods are being studied since several decades and their usage needs advanced skills regarding optimal control theory. The main idea of direct approaches, also known as transcription techniques, is to transform the infinite-dimensional OCP into a finite-dimensional non-linear optimization problem (NLP) via discretization of states and controls. These direct methods are applied in this paper. The resulting high dimensional NLP with constraints can be solved efficiently by special NLP methods, e.g. sequential quadratic programming (SQP) or interior point methods (IP). The movement of the spacecraft due to gravitational influences of the sun and other planets, as well as the thrust commands, is described through ordinary differential equations (ODEs). The competitive mission aims like short flight times and low energy consumption are considered by using a multi-criteria objective function. The resulting non-linear high-dimensional optimization problems are solved by using the software package WORHP ('We Optimize Really Huge Problems'), a software routine combining SQP at an outer level and IP to solve underlying quadratic subproblems. An application-adapted model of impulsive thrusting, as well as a model of an electrically powered spacecraft propulsion system, is introduced. Different priorities and possibilities of a space mission regarding energy cost and flight time duration are investigated by choosing different weighting factors for the multi-criteria objective function. Varying mission trajectories are analyzed and compared, both aiming at different destination asteroids and using different propulsion systems. For the transcription, the robust method of full discretization is used. The results strengthen the need for trajectory optimization as a foundation for autonomous decision making during deep space missions. 
Simultaneously, they show the enormous increase in possibilities for flight maneuvers by being able to consider different and opposing mission objectives.
Keywords: deep space navigation, guidance, multi-objective, non-linear optimization, optimal control, trajectory planning
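To make the transcription idea above concrete, the sketch below applies full discretization to a toy 1-D double integrator rather than the actual spacecraft dynamics, with trapezoidal dynamics constraints and SciPy's SLSQP standing in for an SQP solver such as WORHP; all numbers are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

N, T = 40, 10.0                 # nodes, fixed transfer time
h = T / (N - 1)

def unpack(z):
    return z[:N], z[N:2*N], z[2*N:]          # position, velocity, control

def objective(z):
    _, _, u = unpack(z)
    return h * np.sum(u**2)                  # minimize control effort

def defects(z):                              # trapezoidal dynamics constraints
    x, v, u = unpack(z)
    dx = x[1:] - x[:-1] - 0.5 * h * (v[1:] + v[:-1])
    dv = v[1:] - v[:-1] - 0.5 * h * (u[1:] + u[:-1])
    return np.concatenate([dx, dv])

def boundary(z):                             # rest-to-rest transfer from 0 to 1
    x, v, _ = unpack(z)
    return np.array([x[0], v[0], x[-1] - 1.0, v[-1]])

z0 = np.concatenate([np.linspace(0, 1, N), np.zeros(N), np.zeros(N)])
res = minimize(objective, z0, method="SLSQP",
               bounds=[(None, None)] * 2 * N + [(-0.5, 0.5)] * N,  # bounded thrust
               constraints=[{"type": "eq", "fun": defects},
                            {"type": "eq", "fun": boundary}])
print(res.success, res.fun)
```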
Procedia PDF Downloads 412
98 Current Zonal Isolation Regulation and Standards: A Compare and Contrast Review in Plug and Abandonment
Authors: Z. A. Al Marhoon, H. S. Al Ramis, C. Teodoriu
Abstract:
Well-integrity is one of the major elements considered for drilling geothermal, oil, and gas wells. Well-integrity is minimizing the risk of unplanned fluid flow in the well bore throughout the well lifetime. Well integrity is maximized by applying technical concepts along with practical practices and strategic planning. These practices are usually governed by standardization and regulation entities. Practices during well construction can affect the integrity of the seal at the time of abandonment. On the other hand, achieving a perfect barrier system is impracticable due to the needed cost. This results in a needed balance between regulations requirements and practical applications. The guidelines are only effective when they are attainable in practical applications. Various governmental regulations and international standards have different guidelines on what constitutes high-quality isolation from unwanted flow. Each regulating or standardization body differ in requirements based on the abandonment objective. Some regulation account more for the environmental impact, water table contamination, and possible leaks. Other regulation might lean towards driving more economical benefits while achieving an acceptable isolation criteria. The research methodology used in this topic is derived from a literature review method combined with a compare and contrast analysis. The literature review on various zonal isolation regulations and standards has been conducted. A review includes guidelines from NORSOK (Norwegian governing entity), BSEE (USA offshore governing entity), API (American Petroleum Institute) combined with ISO (International Standardization Organization). The compare and contrast analysis is conducted by assessing the objective of each abandonment regulations and standardization. The current state of well barrier regulation is in balancing action. From one side of this balance, the environmental impact and complete zonal isolation is considered. The other side of the scale is practical application and associated cost. Some standards provide a fair amount of details concerning technical requirements and are often flexible with the needed associated cost. These guidelines cover environmental impact with laws that prevent major or disastrous environmental effects of improper sealing of wells. Usually these regulations are concerned with the near future of sealing rather than long-term. Consequently, applying these guidelines become more feasible from a cost point of view to the required plugging entities. On the other hand, other regulation have well integrity procedures and regulations that lean toward more restrictions environmentally with an increased associated cost requirements. The environmental impact is detailed and covered with its entirety, including medium to small environmental impact in barrier installing operations. Clear and precise attention to long-term leakage prevention is present in these regulations. The result of the compare and contrast analysis of the literature showed that there are various objectives that might tip the scale from one side of the balance (cost) to the other (sealing quality) especially in reference to zonal isolation. Furthermore, investing in initial well construction is a crucial part of ensuring safe final well abandonment. The safety and the cost saving at the end of the well life cycle is dependent upon a well-constructed isolation systems at the beginning of the life cycle. 
Long-term studies on zonal isolation using various hydraulic or mechanical materials are needed to further assess permanently abandoned wells and to achieve the desired balance. Well drilling and isolation techniques will be more effective when they are operationally feasible and carry a reasonable associated cost that supports the local economy.Keywords: plug and abandon, P&A regulation, P&A standards, international guidelines, gap analysis
Procedia PDF Downloads 133
97 Bioactive Substances-Loaded Water-in-Oil/Oil-in-Water Emulsions for Dietary Supplementation in the Elderly
Authors: Agnieszka Markowska-Radomska, Ewa Dluska
Abstract:
Maintaining a diet dense in bioactive substances is important for the elderly, especially to prevent disease and support healthy ageing. Adequate intake of bioactive substances can reduce the risk of developing chronic diseases (e.g. cardiovascular disease, osteoporosis, neurodegenerative syndromes, diseases of the oral cavity, gastrointestinal (GI) disorders, diabetes, and cancer). This can be achieved by introducing comprehensive supplementation of the components necessary for the proper functioning of the ageing body. The paper proposes multiple emulsions of the W1/O/W2 (water-in-oil-in-water) type as carriers for effective co-encapsulation and co-delivery of bioactive substances in supplementation of the elderly. Multiple emulsions are complex structured systems ("drops in drops"). The functional structure of the W1/O/W2 emulsion enables (i) incorporation of one or more bioactive components (lipophilic and hydrophilic); (ii) enhancement of the stability and bioavailability of the encapsulated substances; (iii) prevention of interactions between substances, as well as with the external environment, and delivery to a specific location; and (iv) release in a controlled manner. The multiple emulsions were prepared by a one-step method in a Couette-Taylor flow (CTF) contactor operated continuously, whereas a two-step emulsification process is generally used to obtain multiple emulsions. The emulsions were functionalized by introducing a pH-responsive biopolymer, carboxymethylcellulose sodium salt (CMC-Na), into the external phase, which made it possible to release the components in a manner controlled by the pH of the gastrointestinal environment. The membrane phase of the emulsions was soybean oil. The W1/O/W2 emulsions were evaluated for their characteristics (drop size/drop size distribution, volume packing fraction), encapsulation efficiency, and stability during storage (up to 30 days) at 4ºC and 25ºC. The in vitro co-release of multiple substances was also investigated in a simulated gastrointestinal environment (different pH and composition of the release medium). Three groups of stable multiple emulsions were obtained: emulsions I with co-encapsulated vitamins B12, B6 and resveratrol; emulsions II with vitamin A and β-carotene; and emulsions III with vitamins C, E and D3. The substances were encapsulated in the appropriate emulsion phases depending on their solubility. For all emulsions, high encapsulation efficiency (over 95%) and a high volume packing fraction of internal droplets (0.54-0.76) were reached. In addition, owing to the presence of a polymer (CMC-Na) with adhesive properties, high encapsulation stability during storage was achieved. The co-release study of the encapsulated bioactive substances confirmed the possibility of modifying the release profiles. It was found that the release process can be controlled through the composition, structure, and physicochemical parameters of the emulsions and the pH of the release medium. The results showed that the obtained multiple emulsions might be used as potential liquid complex carriers for controlled/modified/site-specific co-delivery of bioactive substances in dietary supplementation in the elderly.Keywords: bioactive substance co-release, co-encapsulation, elderly supplementation, multiple emulsion
Procedia PDF Downloads 198
96 Electromyographic Analysis of Biceps Brachii during Golf Swing and Review of Its Impact on Return to Play Following Tendon Surgery
Authors: Amin Masoumiganjgah, Luke Salmon, Julianne Burnton, Fahimeh Bagheri, Gavin Lenton, S. L. Ezekial Tan
Abstract:
Introduction: The incidence of proximal biceps tenodesis and acute distal biceps repair is increasing, and rehabilitation protocols following both are variable. Golf is a popular sport within Australia, and the Gold Coast has become a mecca for golfers, with more courses per capita than anywhere else in the world. Currently, there are no clear guidelines regarding return to golf following biceps procedures. The aim of this study was to determine biceps brachii activation during the golf swing through electromyographic analysis and, subsequently, to inform rehabilitation guidelines and return to golf following tenodesis and repair. Methods: Subjects were amateur golfers with no previous upper limb surgery. Surface electromyography (EMG) and high-speed video recording were used to analyse activation of the left and right biceps brachii and the anterior deltoid during the golf swing. Each participant's maximum voluntary contraction (MVC) was recorded, and they were then required to hit a golf ball aiming for specific distances of 2, 50, 100 and 150 metres at a driving range. Noraxon myoResearch and Matlab were used for data analysis. Mean %MVC was calculated for the leading and trailing arms during the full swing and its four phases: backswing, acceleration, early follow-through and late follow-through. Results: 12 golfers (2 female and 10 male) participated in the study. Median age was 27 (range 25-38), and all were right-handed. Over all distances, the mean activation of the short and long heads of biceps brachii was < 10% through the full swing. When breaking down the 50, 100 and 150 m swings into phases, mean MVC activation was lowest in the backswing (5.1%), followed by acceleration (9.7%), early follow-through (9.2%), and late follow-through (21.4%). There was more variation and slightly higher activation in the right biceps (trailing arm) in the backswing, acceleration, and early follow-through, and higher activation in the leading arm in the late follow-through (25.4% leading, 17.3% trailing). 2 m putts resulted in low MVC values (3.1%) with little variation across swing phases. There was considerable individual variation in results: one tense subject averaged 11.0% biceps MVC through the 2 m putting stroke, and others recorded peak mean MVC biceps activations of 68.9% at 50 m, 101.3% at 100 m, and 111.3% at 150 m. Discussion: Previous studies have investigated the role of the rotator cuff, spine, and hip muscles during the golf swing; however, to our knowledge, this is the first study to investigate the activation of biceps brachii. Many rehabilitation programs following a biceps tenodesis or repair allow active range of motion against gravity and restrict strengthening exercises until 6 weeks, and this does not appear to be associated with any adverse outcome. Previous studies demonstrate that a range of < 10% MVC is similar to the unloaded biceps brachii during walking (1), that active elbow flexion with the hand positioned in either pronation or supination produces < 20% MVC throughout range (2), and that elbow flexion with a 4 kg dumbbell can produce mean MVCs of around 40% (3). Our study demonstrates that increased activation is associated with the leading arm, increasing shot distance and the late follow-through phase. Although the cohort mean MVC of the biceps brachii is < 10% through the full swing, variability is high, and biceps activation reaches peak mean MVCs of over 100% in different swing phases for some individuals.
Given these EMG values, caution is warranted when advising patients to return to long-distance golf shots after biceps procedures, particularly for the leading arm. Although putting would appear to be as safe as having the unloaded hand out of a sling following biceps procedures, the variability of activation patterns across different golfers leads us to caution against accelerated golf rehabilitation in golfers who may be particularly tense. The 50 m short iron shot was too long to be considered a chip shot, and more work is needed in this area to determine the safety of chipping.Keywords: electromyographic analysis, biceps brachii rupture, golf swing, tendon surgery
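As an illustration of the signal processing behind such %MVC figures, the sketch below rectifies and smooths a raw biceps EMG trace, normalises it to the maximum voluntary contraction, and averages it over swing-phase windows. It is not the authors' Noraxon/Matlab pipeline; the sampling rate, smoothing window and phase boundaries are assumed for illustration only.

```python
# Illustrative sketch (not the authors' processing pipeline): normalising a raw
# biceps EMG trace to %MVC and averaging it over assumed swing-phase windows.
import numpy as np

FS = 1500  # Hz, assumed EMG sampling rate

def rms_envelope(emg, window_ms=50):
    """Full-wave rectify the signal and smooth it with a moving RMS window."""
    rectified = np.abs(emg - np.mean(emg))          # remove DC offset, rectify
    n = max(1, int(FS * window_ms / 1000))
    kernel = np.ones(n) / n
    return np.sqrt(np.convolve(rectified**2, kernel, mode="same"))

def percent_mvc(emg, mvc_trace):
    """Express the envelope as a percentage of the maximum voluntary contraction."""
    mvc_peak = rms_envelope(mvc_trace).max()
    return 100.0 * rms_envelope(emg) / mvc_peak

def mean_activation_by_phase(pct_mvc, phase_bounds_s):
    """Average %MVC inside each (start, end) phase window, given in seconds."""
    return {name: pct_mvc[int(t0 * FS):int(t1 * FS)].mean()
            for name, (t0, t1) in phase_bounds_s.items()}

# Hypothetical swing-phase boundaries (seconds from the start of the recording)
phases = {"backswing": (0.0, 0.9), "acceleration": (0.9, 1.2),
          "early follow-through": (1.2, 1.5), "late follow-through": (1.5, 2.2)}

rng = np.random.default_rng(0)
swing_emg = rng.normal(0, 0.1, int(2.2 * FS))       # placeholder raw swing EMG
mvc_emg = rng.normal(0, 0.4, int(3.0 * FS))         # placeholder MVC recording
print(mean_activation_by_phase(percent_mvc(swing_emg, mvc_emg), phases))
```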
Procedia PDF Downloads 81
95 An Improved Atmospheric Correction Method with Diurnal Temperature Cycle Model for MSG-SEVIRI TIR Data under Clear Sky Condition
Authors: Caixia Gao, Chuanrong Li, Lingli Tang, Lingling Ma, Yonggang Qian, Ning Wang
Abstract:
Knowledge of land surface temperature (LST) is of crucial importance in energy balance studies and environmental modeling. Satellite thermal infrared (TIR) imagery is the primary source for retrieving LST at regional and global scales. Because the radiance received by TIR sensors combines contributions from the atmosphere and the land surface, atmospheric correction has to be performed to remove the atmospheric transmittance and upwelling radiance. The Spinning Enhanced Visible and Infrared Imager (SEVIRI) onboard Meteosat Second Generation (MSG) provides measurements every 15 minutes in 12 spectral channels covering the visible to infrared spectrum at fixed view angles with a 3 km pixel size at nadir, offering new and unique capabilities for LST and LSE measurements. However, because of its high temporal resolution, atmospheric correction cannot be performed with radiosonde profiles or reanalysis data, since these profiles are not available at all SEVIRI TIR image acquisition times. To solve this problem, a two-part, six-parameter semi-empirical diurnal temperature cycle (DTC) model has been applied to the temporal interpolation of ECMWF reanalysis data. Because the DTC model is underdetermined when ECMWF data are available at only four synoptic times (UTC 00:00, 06:00, 12:00, 18:00) per day for each location, several approaches are adopted in this study. It is well known that atmospheric transmittance and upwelling radiance are related to water vapour content (WVC). With the aid of simulated data, this relationship can be determined for each viewing zenith angle and each SEVIRI TIR channel. Thus, the atmospheric transmittance and upwelling radiance are first removed using the instantaneous WVC, which is retrieved from the brightness temperatures in SEVIRI channels 5, 9 and 10, and a set of brightness temperatures for the surface-leaving radiance (Tg) is acquired. Subsequently, a set of the six DTC model parameters is fitted to these Tg by a Levenberg-Marquardt least squares algorithm (denoted DTC model 1). Although the WVC retrieval error and the approximate relationships between WVC and the atmospheric parameters introduce some uncertainty, this does not significantly affect the determination of three of the parameters, td, ts and β (β is the angular frequency, td is the time at which Tg reaches its maximum, and ts is the starting time of attenuation) in the DTC model. Furthermore, because of the large temperature fluctuations and the inaccuracy of the DTC model around sunrise, SEVIRI measurements from two hours before to two hours after sunrise are excluded. With td, ts, and β known, a new DTC model (denoted DTC model 2) is fitted again to the Tg at UTC 05:57, 11:57, 17:57 and 23:57, which are atmospherically corrected with ECMWF data. A new set of the six DTC parameters is thereby generated, and subsequently the Tg at any given time can be obtained. Finally, the method is applied successfully to SEVIRI data in channel 9. The results show that the proposed method can be performed reasonably without additional assumptions and that the Tg derived with the improved method is much more consistent with radiosonde-based values.Keywords: atmosphere correction, diurnal temperature cycle model, land surface temperature, SEVIRI
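To make the fitting step concrete, the sketch below fits a two-part DTC model (a cosine daytime rise followed by an exponential nighttime decay) to surface-leaving brightness temperatures with a Levenberg-Marquardt least-squares solver. The functional form and all parameter values are assumptions for illustration and may differ from the exact six-parameter model used in the study.

```python
# Illustrative sketch: fitting a semi-empirical diurnal temperature cycle (DTC) model
# to brightness temperatures with a Levenberg-Marquardt least-squares solver.
# The cosine/exponential form below is one common DTC formulation, assumed here.
import numpy as np
from scipy.optimize import least_squares

def dtc_model(t, T0, Ta, omega, tm, ts, k):
    """Two-part DTC: cosine before the attenuation start time ts, exponential decay after."""
    day = T0 + Ta * np.cos(np.pi / omega * (t - tm))
    T_ts = T0 + Ta * np.cos(np.pi / omega * (ts - tm))
    night = (T_ts - T0) * np.exp(-(t - ts) / k) + T0
    return np.where(t < ts, day, night)

def fit_dtc(t_obs, tg_obs, p0=(290.0, 15.0, 12.0, 13.0, 17.0, 4.0)):
    """Fit the six DTC parameters to observed surface-leaving brightness temperatures."""
    residual = lambda p: dtc_model(t_obs, *p) - tg_obs
    return least_squares(residual, p0, method="lm").x

# Hypothetical Tg samples every 15 minutes (SEVIRI-like), excluding the sunrise window
t_obs = np.arange(8.0, 24.0, 0.25)                       # hours UTC
truth = dtc_model(t_obs, 295.0, 12.0, 11.0, 13.5, 17.5, 3.5)
tg_obs = truth + np.random.default_rng(1).normal(0, 0.3, t_obs.size)
print(fit_dtc(t_obs, tg_obs))
```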
Procedia PDF Downloads 268
94 Environmental Catalysts for Refining Technology Application: Reduction of CO Emission and Gasoline Sulphur in Fluid Catalytic Cracking Unit
Authors: Loganathan Kumaresan, Velusamy Chidambaram, Arumugam Velayutham Karthikeyani, Alex Cheru Pulikottil, Madhusudan Sau, Gurpreet Singh Kapur, Sankara Sri Venkata Ramakumar
Abstract:
Environmentally driven regulations throughout the world stipulate dramatic improvements in the quality of transportation fuels and refining operations. Exhaust gases such as CO, NOx, and SOx from stationary sources (e.g., refineries) and motor vehicles contribute substantially to air pollution. The refining industry is under constant environmental pressure to achieve more rigorous standards on the sulphur content of transportation fuels and on other off-gas emissions. The fluid catalytic cracking unit (FCCU) is a major secondary refinery process for gasoline and diesel production. The CO-combustion promoter additive and the gasoline sulphur reduction (GSR) additive are catalytic systems used in the FCCU, alongside the main FCC catalyst, to assist the combustion of CO to CO₂ in the regenerator and to regulate sulphur in the gasoline fraction, respectively. The effectiveness of these catalysts is governed by the active metal used, its dispersion, the type of base material employed, and the retention characteristics of the additive in the FCCU, such as attrition resistance and density. The challenge is to obtain a high-density microsphere catalyst support for good retention and high activity of the active metals, because these additives are used in low concentrations compared to the main FCC catalyst. The first part of the present paper discusses the development of a high-density microsphere of nanocrystalline alumina by a hydrothermal method for the CO combustion promoter application. Performance evaluation of the additive was conducted under simulated regenerator conditions and shows a CO combustion efficiency above 90%. The second part discusses the efficacy of a co-precipitation method for generating active crystalline spinels of Zn, Mg, and Cu with aluminium oxides as an additive. Characterization and micro-activity testing with a heavy combined hydrocarbon feedstock at FCC unit conditions were carried out to evaluate gasoline sulphur reduction activity. The additives were characterized by X-ray diffraction, NH₃-TPD, N₂ sorption analysis, and TPR analysis to establish structure-activity relationships. Sulphur removal mechanisms involving hydrogen transfer, aromatization, and alkylation functionalities are established to rank the GSR additives for activity, selectivity, and gasoline sulphur removal efficiency. The sulphur shift into other liquid products such as heavy naphtha, light cycle oil, and clarified oil was also studied. PIONA analysis of the liquid product reveals a 20-40% reduction of sulphur in gasoline without compromising the research octane number (RON) of the gasoline or its olefin content.Keywords: hydrothermal, nanocrystalline, spinel, sulphur reduction
Procedia PDF Downloads 96
93 Insights into Particle Dispersion, Agglomeration and Deposition in Turbulent Channel Flow
Authors: Mohammad Afkhami, Ali Hassanpour, Michael Fairweather
Abstract:
The work described in this paper was undertaken to gain insight into fundamental aspects of turbulent gas-particle flows, with relevance to processes employed in a wide range of applications such as oil and gas flow assurance in pipes, powder dispersion from dry powder inhalers, and particle resuspension in nuclear waste ponds, to name but a few. In particular, the influence of particle interaction and fluid phase behaviour in turbulent flow on particle dispersion in a horizontal channel is investigated. The mathematical modelling technique used is based on the large eddy simulation (LES) methodology embodied in the commercial CFD code FLUENT, with the flow solutions provided by this approach coupled to a second commercial code, EDEM, based on the discrete element method (DEM), which is used to predict particle motion and interaction. The results generated by LES for the fluid phase have been validated against direct numerical simulations (DNS) for three channel flows with shear Reynolds numbers Reτ = 150, 300 and 590. Overall, the LES shows good agreement, with mean velocities and normal and shear stresses matching those of the DNS in both magnitude and position. The research has focused on predicting the conditions that favour particle aggregation and deposition within turbulent flows. Simulations have been carried out to investigate the effects of particle size, density and concentration on particle agglomeration. Furthermore, particles with different surface properties have been simulated in three channel flows with different levels of flow turbulence, achieved by increasing the Reynolds number of the flow. The simulations mimic the conditions of two-phase, fluid-solid flows frequently encountered in domestic, commercial and industrial applications, for example, air conditioning and refrigeration units, heat exchangers, and oil and gas suction and pressure lines. The particle sizes, densities, surface energies and volume fractions selected are 45.6, 102 and 150 µm; 250, 1000 and 2159 kg m⁻³; 50, 500 and 5000 mJ m⁻²; and 7.84 × 10⁻⁶, 2.8 × 10⁻⁵ and 1 × 10⁻⁴, respectively; such particle properties are associated with particles found in soil, as well as with metals and oxides prevalent in turbulent bounded fluid-solid flows due to erosion and corrosion of inner pipe walls. It has been found that the turbulence structure of the flow dominates the motion of the particles and creates particle-particle interactions, with most of these interactions taking place close to the channel walls and in regions of high turbulence, where agglomeration is aided both by the high levels of turbulence and by the high concentration of particles. A positive relationship between particle surface energy, concentration, size and density, and agglomeration was observed. Moreover, the results for the three Reynolds numbers considered show that, for high surface energy particles, the rate of agglomeration is strongly influenced by, and increases with, the intensity of the flow turbulence. In contrast, for lower surface energy particles, the rate of agglomeration diminishes with increasing flow turbulence intensity.Keywords: agglomeration, channel flow, DEM, LES, turbulence
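As a back-of-the-envelope companion to these simulations, the sketch below computes Stokes-drag relaxation times and Stokes numbers for the particle sizes and densities listed above. The carrier-fluid viscosity and the flow time scale are assumed, and the calculation only illustrates the particle-response argument; it does not reproduce the FLUENT/EDEM coupling.

```python
# Illustrative sketch: particle relaxation times and Stokes numbers for the particle
# sizes and densities quoted in the abstract. The air properties and the flow time
# scale below are assumptions, not values from the paper.
import itertools

MU_AIR = 1.8e-5      # Pa.s, dynamic viscosity of air (assumed carrier fluid)
TAU_FLUID = 5e-3     # s, assumed characteristic time scale of the turbulent flow

def relaxation_time(d_p, rho_p, mu=MU_AIR):
    """Stokes-drag particle response time: tau_p = rho_p * d_p**2 / (18 * mu)."""
    return rho_p * d_p**2 / (18.0 * mu)

def stokes_number(d_p, rho_p, tau_f=TAU_FLUID):
    """Ratio of the particle response time to the flow time scale."""
    return relaxation_time(d_p, rho_p) / tau_f

diameters = [45.6e-6, 102e-6, 150e-6]      # m
densities = [250.0, 1000.0, 2159.0]        # kg/m^3

for d_p, rho_p in itertools.product(diameters, densities):
    print(f"d={d_p*1e6:6.1f} um  rho={rho_p:6.0f} kg/m3  "
          f"St={stokes_number(d_p, rho_p):8.2f}")
```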
Procedia PDF Downloads 317
92 Hydroxyapatite Nanorods as Novel Fillers for Improving the Properties of PBSu
Authors: M. Nerantzaki, I. Koliakou, D. Bikiaris
Abstract:
This study evaluates the hypothesis that the incorporation of fibrous hydroxyapatite nanoparticles (nHA) with high crystallinity and high aspect ratio, synthesized by a hydrothermal method, into poly(butylene succinate) (PBSu) improves the bioactivity of the aliphatic polyester and supports new bone growth by inhibiting resorption and enhancing bone formation. The hydroxyapatite nanorods were synthesized using a simple hydrothermal procedure. First, the HPO₄²⁻-containing solution was added drop-wise into the Ca²⁺-containing solution, while the Ca/P molar ratio was adjusted to 1.67. The HA precursor was then treated hydrothermally at 200°C for 72 h. The resulting powder was characterized using XRD, FT-IR, TEM, and EDXA. Afterwards, PBSu nanocomposites containing 2.5 wt% nHA were prepared by an in situ polymerization technique for the first time and examined as potential scaffolds for bone engineering applications. For comparison purposes, composites containing either 2.5 wt% micro-Bioglass (mBG) or 2.5 wt% mBG-nHA were also prepared and studied. The composite scaffolds were characterized using SEM, FTIR, and XRD. Mechanical testing (Instron 3344) and contact angle measurements were also carried out. Enzymatic degradation was studied in an aqueous solution containing a mixture of R. oryzae and P. cepacia lipases at 37°C and pH=7.2. An in vitro biomineralization test was performed by immersing all samples in simulated body fluid (SBF) for 21 days. Biocompatibility was assessed using rat adipose stem cells (rASCs), genetically modified by nucleofection with DNA encoding SB100x transposase and pT2-Venus-neo transposon expression plasmids in order to obtain fluorescence images. Cell proliferation and viability on the scaffolds were evaluated using fluorescence microscopy and the MTT (3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide) assay. Finally, osteogenic differentiation was assessed by staining rASCs with alizarin red using the cetylpyridinium chloride (CPC) method. The TEM image of the fibrous HAp nanoparticles synthesized in the present study clearly showed the fibrous morphology of the powder. The addition of nHA significantly decreased the contact angle of the samples, indicating that the materials become more hydrophilic and hence absorb more water and subsequently degrade more rapidly. The in vitro biomineralization test confirmed that all samples were bioactive, as mineral deposits were detected by X-ray diffractometry after incubation in SBF. Metabolic activity of rASCs on all PBSu composites was high and increased from day 1 of culture to day 14. On day 28, the metabolic activity of rASCs cultured on samples enriched with bioceramics was significantly decreased, owing to the possible differentiation of rASCs into osteoblasts. Staining rASCs with alizarin red after 28 days in culture confirmed our initial hypothesis, as the presence of calcium was detected, suggesting osteogenic differentiation of rASCs on the PBSu/nHAp/mBG 2.5% and PBSu/mBG 2.5% composite scaffolds.Keywords: biomaterials, hydroxyapatite nanorods, poly(butylene succinate), scaffolds
Procedia PDF Downloads 308
91 Reinforcement of Calcium Phosphate Cement with E-Glass Fibre
Authors: Kanchan Maji, Debasmita Pani, Sudip Dasgupta
Abstract:
Calcium phosphate cement (CPC), owing to its high bioactivity and optimum bioresorbability, shows excellent bone regeneration capability. Nevertheless, its macro-porous microstructure results in poor mechanical strength, which limits its application as a bone implant. Reinforcing apatitic CPCs with a biocompatible glass fibre phase is therefore an attractive route to improving their mechanical strength. Here we study the setting behaviour of Si-doped and un-doped alpha tri-calcium phosphate (α-TCP) based CPC and its reinforcement by the addition of E-glass fibre. Alpha tri-calcium phosphate powders were prepared by solid-state sintering of CaCO3 and CaHPO4, and tetraethyl orthosilicate (TEOS) was used as the silicon source to synthesise the Si-doped α-TCP powders. Alpha tri-calcium phosphate based CPC hydrolyzes to form hydroxyapatite (HA) crystals, which have excellent osteoconductivity and bone-replacement capability, and thus self-hardens through the entanglement of HA crystals. The setting time, phase composition, hydrolysis conversion rate, microstructure, and diametral tensile strength (DTS) of un-doped and Si-doped CPC were studied and compared. Both the initial and final setting times of the developed cement were delayed by the Si addition. Crystalline phases of HA (JCPDS 9-432), α-TCP (JCPDS 29-359) and β-TCP (JCPDS 9-169) were detected in the X-ray diffraction (XRD) pattern after immersion of the CPC in simulated body fluid (SBF) for 0 hours to 10 days. The intensities of the α-TCP (201) and (161) peaks at 2θ of 22.2° and 24.1° decreased as the immersion time in SBF increased from 0 hours to 10 days, owing to the transformation of α-TCP into HA. Because Si incorporation in the crystal lattice stabilised the TCP phase, Si-doped CPC showed a slightly slower rate of conversion into the HA phase than un-doped CPC. SEM images of the hardened CPC microstructure showed a smaller HA grain size in un-doped CPC because of its premature setting and faster hydrolysis in SBF compared with Si-doped CPC. Premature setting generated micro- and macro-porosity in the un-doped CPC structure, which resulted in lower mechanical strength than that of Si-doped CPC. The lower porosity and greater compactness of the microstructure account for the greater DTS values observed in Si-doped CPC. E-glass fibres with an average diameter of 12 μm were cut to approximately 1 mm lengths and immersed in SBF to deposit carbonated apatite on their surface. This was done to promote HA crystal growth and entanglement along the fibre surface and thereby create a stronger interface between the dispersed E-glass fibre and the CPC matrix. It was found that the addition of 10 wt% E-glass fibre to Si-doped α-TCP increased the average DTS of the CPC from 8 MPa to 15 MPa, as the fibres could resist crack propagation by deflecting the crack tip. Our study shows that biocompatible E-glass fibre, in an optimum proportion in the CPC matrix, can enhance the mechanical strength of CPC without affecting its bioactivity.Keywords: Calcium phosphate cement, biocompatibility, e-glass fibre, diametral tensile strength
Procedia PDF Downloads 346
90 A Dynamic Model for Circularity Assessment of Nutrient Recovery from Domestic Sewage
Authors: Anurag Bhambhani, Jan Peter Van Der Hoek, Zoran Kapelan
Abstract:
The food system depends on the availability of phosphorus (P) and nitrogen (N). A growing population, depleting phosphorus reserves, and energy-intensive industrial nitrogen fixation threaten their future availability. Recovering P and N from domestic sewage water offers a solution: recovered P and N can be applied to agricultural land, replacing virgin P and N, so recovery from sewage water befits a circular economy. To ensure minimum waste and maximum resource efficiency, a circularity assessment method is crucial for optimizing nutrient flows and minimizing losses. The Material Circularity Indicator (MCI) is a useful method for quantifying the circularity of materials. It was developed for materials that remain within the market and was recently extended to include biotic materials that may be composted or used for energy recovery after end-of-use. However, MCI has not been used in the context of nutrient recovery. Moreover, MCI is time-static, i.e., it cannot account for dynamic systems such as the terrestrial nutrient cycles. Nutrient application to agricultural land is a highly dynamic process in which flows and stocks change with time; the rate of recycling of nutrients in nature depends on numerous factors such as prevailing soil conditions, local hydrology, and the presence of animals. Therefore, a dynamic model of nutrient flows with indicators is needed for the circularity assessment. A simple substance flow model of P and N will be developed using flow equations and transfer coefficients that incorporate the nutrient recovery step along with agricultural application, volatilization and leaching processes, plant uptake, and subsequent animal and human uptake. The model is then used to calculate the proportions of linear and restorative flows (flows coming from reused/recycled sources). The model will simulate the adsorption process based on the quantity of adsorbent and the nutrient concentration in the water. Thereafter, the application of the adsorbed nutrients to agricultural land will be simulated based on adsorbate release kinetics, local soil conditions, hydrology, vegetation, etc. Based on the model, the restorative nutrient flow (returning to the sewage plant following human consumption) will be calculated. The developed methodology will be applied to a case study of resource recovery from wastewater located in Italy, in which biochar or zeolite is to be used to recover P and N from domestic sewage through adsorption and thereafter used as a slow-release fertilizer in agriculture. Using this model, information on the efficiency of nutrient recovery and application can be generated, helping to optimize both and thereby reduce the dependence of the food system on the virgin extraction of P and N.Keywords: circular economy, dynamic substance flow, nutrient cycles, resource recovery from water
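A minimal sketch of the kind of dynamic substance flow calculation described here is given below: phosphorus moves between compartments via first-order transfer coefficients, and the restorative (recovered) share of the fertiliser input is tracked over time. The compartment structure, coefficient values and fertiliser demand are all assumptions, not the calibrated model of the case study.

```python
# Minimal sketch (illustrative, not the paper's calibrated model) of a dynamic
# substance-flow model for phosphorus with first-order transfer coefficients.
def simulate_p_flows(years=20, demand=100.0, recovery_eff=0.6,
                     uptake_frac=0.4, leach_frac=0.1, sewage_frac=0.8):
    """Track virgin vs. recovered (restorative) P inputs to agriculture over time."""
    soil_stock, sewage_p = 0.0, 0.0
    history = []
    for year in range(years):
        recovered = recovery_eff * sewage_p          # P recovered from sewage (e.g. adsorbed on biochar/zeolite)
        virgin = max(demand - recovered, 0.0)        # virgin P needed to meet the fertiliser demand
        applied = recovered + virgin
        soil_stock += applied
        uptake = uptake_frac * soil_stock            # plant uptake -> food -> humans
        leached = leach_frac * soil_stock            # losses to water bodies
        soil_stock -= uptake + leached
        sewage_p = sewage_frac * uptake              # fraction of consumed P reaching the sewage plant
        restorative_share = recovered / applied if applied else 0.0
        history.append((year, virgin, recovered, restorative_share))
    return history

for year, virgin, recovered, share in simulate_p_flows():
    print(f"year {year:2d}: virgin={virgin:6.1f}  recovered={recovered:6.1f}  "
          f"restorative share={share:5.2f}")
```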
Procedia PDF Downloads 197
89 Coordinative Remote Sensing Observation Technology for a High Altitude Barrier Lake
Authors: Zhang Xin
Abstract:
Barrier lakes are lakes formed when water is impounded in valleys, river valleys or riverbeds after being blocked by landslides, earthquakes, debris flows, and other events. They pose serious potential safety hazards: once enough water has accumulated, the dam may burst under a strong earthquake or rainstorm and the lake water overflow, resulting in large-scale flood disasters. To protect the lives and property of people downstream, it is essential to monitor barrier lakes. However, manual monitoring of barrier lakes in high-altitude areas is difficult and time-consuming because of the harsh climate and steep terrain. With the development of Earth observation technology, remote sensing has become one of the main ways to obtain observation data. Compared with a single satellite, coordinated multi-satellite remote sensing observation has clear advantages: extensive spatial coverage, continuous observation in time, abundant imaging types and bands, rapid monitoring of and response to emergencies, and the ability to complete complex monitoring tasks. Monitoring with multi-temporal, multi-platform remote sensing satellites can provide a variety of observation data in a timely manner, yield key information such as the water level and storage capacity of the barrier lake, support a scientific assessment of its condition, and allow a reasonable prediction of its future development. In this study, Lake Sarez, which formed on February 18, 1911, in the central Pamir when the Murgab River valley was blocked by a landslide triggered by a strong earthquake of magnitude 7.4 and intensity 9, is selected as the research area. Since its formation, Lake Sarez has aroused widespread international concern about its safety. At present, mechanical methods dominate international analyses of the safety of Lake Sarez, and remote sensing methods are seldom used. This study combines remote sensing data with field observations and uses 'space-air-ground' joint observation technology to study the changes in the water level and storage capacity of Lake Sarez over recent decades and to evaluate its safety. A dam-break scenario is simulated, and the future development trend of Lake Sarez is predicted. The results show that: 1) in recent decades, the water level of Lake Sarez has changed little and remained stable; 2) barring a strong earthquake or heavy rain, a breach of Lake Sarez under normal conditions is unlikely; 3) Lake Sarez is expected to remain stable in the future, but an early warning system based on remote sensing should be established for the area; and 4) coordinative remote sensing observation technology is feasible for a high-altitude barrier lake such as Sarez.Keywords: coordinative observation, disaster, remote sensing, geographic information system, GIS
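One simple way to derive lake-extent information from optical imagery, assumed here purely for illustration rather than taken from the authors' workflow, is to threshold a normalised difference water index (NDWI) and convert the resulting water mask into an area, as sketched below; the pixel size, threshold and input rasters are hypothetical.

```python
# Illustrative sketch: estimating a barrier lake's water surface area from green and
# near-infrared reflectance using NDWI, then comparing two acquisition dates.
import numpy as np

def ndwi(green, nir):
    """McFeeters NDWI = (green - NIR) / (green + NIR); water pixels are strongly positive."""
    return (green - nir) / np.clip(green + nir, 1e-6, None)

def water_area_km2(green, nir, pixel_size_m=10.0, threshold=0.2):
    """Count pixels whose NDWI exceeds the threshold and convert to square kilometres."""
    water_mask = ndwi(green, nir) > threshold
    return water_mask.sum() * (pixel_size_m ** 2) / 1e6

# Placeholder reflectance rasters standing in for two acquisition dates
rng = np.random.default_rng(2)
green_t0 = rng.uniform(0.05, 0.4, (500, 500))
nir_t0 = rng.uniform(0.02, 0.35, (500, 500))
green_t1, nir_t1 = green_t0 * 1.02, nir_t0 * 0.98
print("area t0:", water_area_km2(green_t0, nir_t0), "km2")
print("area t1:", water_area_km2(green_t1, nir_t1), "km2")
```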
Procedia PDF Downloads 127
88 Monitoring of Rice Phenology and Agricultural Practices from Sentinel 2 Images
Authors: D. Courault, L. Hossard, V. Demarez, E. Ndikumana, D. Ho Tong Minh, N. Baghdadi, F. Ruget
Abstract:
In the context of global change, efficient management of available resources has become one of the most important topics, particularly for sustainable crop development. Timely assessment with high precision is crucial for water resource and pest management. Rice cultivated in the Camargue region of Southern France faces a dual challenge: reducing soil salinity by flooding while at the same time reducing the number of herbicides that negatively impact the environment. This context has led farmers to diversify their crop rotations and agricultural practices. The objective of this study was to evaluate this diversity, both in cropping systems and in the agricultural practices applied to rice paddies, in order to quantify its impact on the environment and on crop production. The proposed method is based on the combined use of crop models and multispectral data acquired by the recent Sentinel 2 satellite sensors launched by the European Space Agency (ESA) within the framework of the Copernicus program. More than 40 images at fine spatial resolution (10 m in the optical range) were processed for 2016 and 2017 (with a revisit time of 5 days) to map crop types using the random forest method and to estimate biophysical variables (leaf area index, LAI) retrieved by inversion of the PROSAIL canopy radiative transfer model. Thanks to the high revisit frequency of Sentinel 2 data, it was possible to monitor tillage of the soil before flooding and the second sowing carried out by some farmers to better control weeds. The temporal trajectories of the remote sensing data were analyzed for various rice cultivars to define the main parameters describing the phenological stages, which are useful for calibrating two crop models (STICS and SAFY). The results were compared with surveys conducted on 10 farms. A large variability of LAI was observed at the farm scale (up to 2-3 m²/m²), which induced significant variability in the simulated yields (up to 2 t/ha). Land use observations were also collected on more than 300 fields. Various maps were produced: land use, LAI, flooding and sowing dates, and harvest dates. Together, these maps allow a new typology to be proposed for classifying these paddy cropping systems. Key phenological dates can be estimated by inverse procedures and were validated against ground surveys. The proposed approach made it possible to compare the two years and to detect anomalies. The methods proposed here can be applied to different crops in various contexts and confirm the potential of fine-resolution remote sensing such as the Sentinel 2 system for agricultural applications and environmental monitoring. This study was supported by the French national centre for space studies (CNES) through the TOSCA program.Keywords: agricultural practices, remote sensing, rice, yield
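A minimal sketch of the crop-type mapping step, assuming a conventional scikit-learn workflow rather than the authors' exact processing chain, is shown below: multi-temporal reflectance features are stacked per parcel and classified with a random forest. The feature dimensions and class labels are placeholders.

```python
# Illustrative sketch: classifying crop types from multi-temporal Sentinel-2-like
# reflectance features with a random forest (assumed workflow, placeholder data).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Placeholder feature matrix: one row per field parcel, columns are reflectances
# stacked over acquisition dates and spectral bands (here 40 dates x 4 bands).
rng = np.random.default_rng(3)
n_parcels, n_features = 300, 40 * 4
X = rng.uniform(0.0, 0.5, (n_parcels, n_features))
y = rng.integers(0, 3, n_parcels)            # 0: rice, 1: wheat, 2: other (hypothetical labels)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print("overall accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```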
Procedia PDF Downloads 274
87 Simulation of Hydraulic Fracturing Fluid Cleanup for Partially Degraded Fracturing Fluids in Unconventional Gas Reservoirs
Authors: Regina A. Tayong, Reza Barati
Abstract:
A stable, fast and robust three-phase, 2D IMPES simulator has been developed for assessing the influence on fracturing fluid cleanup in tight gas reservoirs of breaker concentration (through the yield stress of the filter cake and the broken gel viscosity), of varying polymer concentration and yield stress along the fracture face, and of fracture conductivity, fracture length, capillary pressure changes, and formation damage. The model has been validated against field data reported in the literature for the same reservoir. A 2-D, two-phase (gas/water) fracture propagation model is used to model the invasion zone and to create the initial conditions for the clean-up model by distributing 200 bbl of water around the fracture. A 2-D, three-phase IMPES simulator incorporating a yield-power-law rheology has been developed in MATLAB to characterize fluid flow through a hydraulically fractured grid. The variation in polymer concentration along the fracture is computed from a material balance equation relating the initial polymer concentration to the total volume of injected fluid and the fracture volume. All governing equations and the methods employed have been reported in sufficient detail to permit easy replication of the results. Increasing the capillary pressure in the formation simulated in this study resulted in a 10.4% decrease in cumulative production after 100 days of fluid recovery. Increasing the breaker concentration from 5 to 15 gal/Mgal, which modifies the yield stress and fluid viscosity of a 200 lb/Mgal guar fluid, resulted in a 10.83% increase in cumulative gas production. For tight gas formations (k = 0.05 md), fluid recovery increases with increasing shut-in time, fracture conductivity and fracture length, irrespective of the yield stress of the fracturing fluid. Mechanically induced formation damage combined with hydraulic damage tends to be the most significant. Several correlations have been developed relating the pressure distribution and polymer concentration to the distance along the fracture face, and the average polymer concentration to the injection time. The gradient of the yield stress distribution along the fracture face becomes steeper with increasing polymer concentration, and the rate at which the yield stress (τ₀) increases is found to be proportional to the square of the volume of fluid lost to the formation. Finally, an improvement on previous results was achieved by simulating the yield stress variation along the fracture face rather than assuming constant values, because fluid loss to the formation, and hence the polymer concentration along the fracture face, decreases with distance from the injection well. The novelty of this three-phase flow model lies in its ability to (i) simulate the yield stress variation with fluid loss volume along the fracture face for different initial guar concentrations, and (ii) simulate the effect of increasing breaker activity on the yield stress and broken gel viscosity, as well as the effect of (i) and (ii) on cumulative gas production, within reasonable computational time.Keywords: formation damage, hydraulic fracturing, polymer cleanup, multiphase flow numerical simulation
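For illustration, the sketch below shows a Herschel-Bulkley (yield-power-law) apparent viscosity together with a simple material balance giving the average polymer concentration remaining in the fracture after leak-off. The functional forms are standard textbook expressions and the parameter values are hypothetical; they stand in for, rather than reproduce, the MATLAB simulator described here.

```python
# Illustrative sketch (assumed forms, not the authors' simulator): yield-power-law
# apparent viscosity and a material balance for polymer concentration in the fracture.
import numpy as np

def apparent_viscosity(shear_rate, tau_0, k, n):
    """Yield-power-law (Herschel-Bulkley) apparent viscosity: mu = tau_0/gamma + K*gamma**(n-1)."""
    shear_rate = np.maximum(shear_rate, 1e-9)        # avoid division by zero
    return tau_0 / shear_rate + k * shear_rate ** (n - 1.0)

def polymer_concentration(c0, injected_volume, fracture_volume):
    """Material balance: polymer mass is conserved while the carrier fluid leaks off,
    concentrating the remaining gel inside the fracture."""
    return c0 * injected_volume / fracture_volume

# Hypothetical numbers: 200 lb/Mgal guar, 500 bbl injected, 150 bbl retained in the fracture
c_frac = polymer_concentration(200.0, 500.0, 150.0)
print(f"average polymer concentration in fracture: {c_frac:.0f} lb/Mgal")
print("apparent viscosity at 100 1/s:",
      apparent_viscosity(100.0, tau_0=5.0, k=0.3, n=0.6), "Pa.s")
```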
Procedia PDF Downloads 130
86 Optimal Control of Generators and Series Compensators within Multi-Space-Time Frame
Authors: Qian Chen, Lin Xu, Ping Ju, Zhuoran Li, Yiping Yu, Yuqing Jin
Abstract:
The operation of the power grid is becoming increasingly complex and difficult because of its rapid development towards higher voltages, longer distances, and larger capacities. For instance, many large-scale wind farms have been connected to the grid, and their fluctuation and randomness are likely to affect its stability and safety. Fortunately, many new types of power-electronics-based equipment have been applied to the power grid, such as the UPFC (Unified Power Flow Controller), TCSC (Thyristor Controlled Series Compensation), and STATCOM (Static Synchronous Compensator), which can help to deal with this problem. Compared with traditional equipment such as generators, new controllable devices, represented by FACTS (Flexible AC Transmission System) devices, offer more accurate control and faster response, but they are too expensive to deploy widely. Therefore, based on a comparison and analysis of the control characteristics of traditional control equipment and new controllable equipment on both time and space scales, a coordinated optimizing control method within a multi-space-time frame is proposed in this paper to exploit both kinds of advantages, improving both control capability and economic efficiency. Firstly, the coordination of different spatial scales of the grid is studied, focusing on the fluctuation caused by large-scale wind farms connected to the grid. Using generators, FSC (Fixed Series Compensation) and TCSC, the coordination between a two-layer regional power grid and its sub-grid is studied in detail. The coordination control model is built, the corresponding scheme is proposed, and the conclusions are verified by simulation. The analysis shows that the interface power flow can be controlled by the generators, while the power flow of specific lines between the two regional layers can be adjusted by FSC and TCSC. The smaller the interface power flow adjusted by the generators, the larger the control margin of the TCSC; however, the total generator consumption is then much higher. Secondly, the coordination of different time scales is studied to further balance the total generator consumption against the control margin of the TCSC, so that the minimum control cost can be obtained. The coordination between two-layer ultra-short-term correction and AGC (Automatic Generation Control) is studied with generators, FSC and TCSC. The optimal control model is formulated, a genetic algorithm is selected to solve the problem, and the conclusions are verified by simulation. Finally, the proposed multi-time-space-scale method is analyzed with practical cases and simulated on the PSASP (Power System Analysis Software Package) platform. Its correctness and effectiveness are verified by the simulation results. Moreover, this coordinated optimizing control method can contribute to reducing the control cost and will provide a reference for subsequent studies in this field.Keywords: FACTS, multi-space-time frame, optimal control, TCSC
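To illustrate the optimisation step, the sketch below applies a simple genetic algorithm to a toy coordination problem: choose a generator power-flow adjustment and a TCSC setting that minimise an assumed cost combining generation cost, a penalty on the remaining TCSC control margin, and a flow-control target mismatch. The cost function and all coefficients are hypothetical, not those of the paper's model.

```python
# Minimal sketch of a genetic algorithm applied to a toy generator/TCSC coordination
# problem. The cost function and its coefficients are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(4)

def cost(x):
    x_gen, x_tcsc = x[..., 0], x[..., 1]
    gen_cost = 2.0 * x_gen ** 2                      # cost of re-dispatching generators
    margin_penalty = 5.0 * (1.0 - x_tcsc) ** 2       # penalty for a small remaining TCSC margin
    mismatch = (x_gen + 0.5 * x_tcsc - 1.0) ** 2     # the pair must meet the flow-control target
    return gen_cost + margin_penalty + 50.0 * mismatch

def genetic_minimise(pop_size=60, generations=100, mutation=0.1):
    pop = rng.uniform(0.0, 1.5, (pop_size, 2))
    for _ in range(generations):
        fitness = -cost(pop)
        parents = pop[np.argsort(fitness)[-pop_size // 2:]]                # keep the best half
        children = parents[rng.integers(0, len(parents), pop_size - len(parents))]
        children = children + rng.normal(0.0, mutation, children.shape)    # mutate the offspring
        pop = np.vstack([parents, children])
    return pop[np.argmin(cost(pop))]

best = genetic_minimise()
print("best generator adjustment and TCSC setting:", best, "cost:", cost(best))
```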
Procedia PDF Downloads 267
85 TRAC: A Software Based New Track Circuit for Traffic Regulation
Authors: Jérôme de Reffye, Marc Antoni
Abstract:
Following the development of the ERTMS system, we think it is interesting to develop another software-based track circuit system that would fit secondary railway lines, with a straightforward implementation and low sensitivity to rail-wheel impedance variations. We call this track circuit 'Track Railway by Automatic Circuits' (TRAC). To be implemented internationally, the system must not have any mechanical component and must be compatible with existing track circuit systems. For example, the system is independent of the French 'Joints Isolants Collés' that isolate track sections from one another, and it is equally independent of the axle counters used in Germany ('compteurs d'essieux' in French). This track circuit is fully interoperable. Such universality is obtained by replacing the mechanical train detection system with space-time filtering of the train position. The various track sections are defined by the frequency of a continuous signal, and the set of frequencies associated with the track sections forms a set of orthogonal functions in a Hilbert space. The failure probability of track section separation can therefore be calculated precisely on the basis of the signal-to-noise ratio (SNR). The SNR is a function of the level of traction current conducted by the rails, which is why we developed a powerful algorithm to reject noise and jamming and obtain an SNR compatible with the precision required for the track circuit and with SIL 4. The SIL 4 level is thus reachable by adjusting the set of orthogonal functions. Our major contributions to railway signalling engineering are: i) train localization in space is precisely defined by a calibration system; this operation bypasses the GSM-R radio system of ERTMS, the track circuit is naturally protected against radio-type jammers, and after calibration the track circuit is autonomous; ii) a mathematical topology adapted to train localization, following the train through linear time filtering of the received signal; track sections are defined numerically and can be modified with a software update. The system was numerically simulated, and the results exceeded our expectations: we achieved a precision of one metre, and sensitivity analyses with respect to rail-ground and rail-wheel impedance gave excellent results. The results are now complete and ready to be published. This work was initiated as a research project of the French Railways, developed by the Pi-Ramses Company under SNCF contract, and required five years to obtain these results. This track circuit already reaches Level 3 of the ERTMS system, and it will be much cheaper to implement and operate. Traffic regulation is based on variable-length track sections: as traffic grows, the maximum speed is reduced and the track section lengths decrease. This is possible if the elementary track section is correctly defined for the minimum speed and if every track section is able to emit at variable frequencies.Keywords: track section, track circuits, space-time crossing, adaptive track section, automatic railway signalling
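The sketch below illustrates the general principle of identifying a track section from a set of mutually orthogonal signal frequencies by projecting the received signal onto each basis function (a matched-filter style detection). It is an assumption about the approach, not the TRAC algorithm itself; the sampling rate, integration window and section frequencies are hypothetical.

```python
# Illustrative sketch (general principle only, not the TRAC algorithm): identifying
# which track section a received signal belongs to by projecting it onto a set of
# mutually orthogonal sinusoids, one frequency per section.
import numpy as np

FS = 10_000          # Hz, assumed sampling rate
T = 0.1              # s, integration window; frequencies spaced by 1/T are orthogonal on [0, T]
t = np.arange(0, T, 1 / FS)
section_freqs = {1: 1000.0, 2: 1010.0, 3: 1020.0}     # hypothetical section frequencies (Hz)

def detect_section(received):
    """Return the section whose basis function has the largest inner product with the signal."""
    scores = {sec: abs(np.dot(received, np.sin(2 * np.pi * f * t)))
              for sec, f in section_freqs.items()}
    return max(scores, key=scores.get), scores

# Simulate a signal emitted in section 2, buried in noise (low SNR) plus 50 Hz traction ripple
rng = np.random.default_rng(5)
signal = np.sin(2 * np.pi * section_freqs[2] * t)
received = 0.3 * signal + rng.normal(0, 1.0, t.size) + 0.5 * np.sin(2 * np.pi * 50 * t)
best, scores = detect_section(received)
print("detected section:", best)
```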
Procedia PDF Downloads 331