Search results for: high-speed rotation operation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3363

423 FEM and Experimental Modal Analysis of Computer Mount

Authors: Vishwajit Ghatge, David Looper

Abstract:

Over the last few decades, oilfield service rolling equipment has significantly increased in weight, primarily because of emissions regulations, which require larger and heavier engines, larger cooling systems and, in some cases, emissions after-treatment systems. Larger engines cause more vibration and shock loads, leading to failure of electronics and control systems. If the excitation frequency of the engine matches the natural frequency of the system, strong resonance is observed in structural parts and mounts. One such existing automated control equipment system, comprising wire rope mounts used for mounting computers, was designed approximately 12 years ago. This includes the use of an industrial-grade computer to control the system operation. The original computer had a smaller, lighter enclosure. After a few years, a newer computer version was introduced, which was 10 lbm heavier. Some failures of internal computer parts have been documented for cases in which the old mounts were used. Because of the added weight, there is a possibility of the two brackets impacting each other under off-road conditions, which causes a high shock input to the computer parts. This added failure mode requires validating the existing mount design to suit the new, heavier computer. This paper discusses the modal finite element method (FEM) analysis and experimental modal analysis conducted to study the effects of vibration on the wire rope mounts and the computer. The existing mount was modelled in ANSYS software, and the resultant mode shapes and frequencies were obtained. The experimental modal analysis was conducted, and actual frequency responses were observed and recorded. Results clearly revealed that at the resonance frequency, the brackets were colliding and potentially damaging computer parts. To solve this issue, spring mounts of different stiffness were modelled in ANSYS software, and the resonant frequency was determined. Increasing the stiffness of the system shifted the resonant frequency away from the frequency window in which the engine showed heavy vibration or resonance. After multiple iterations in ANSYS software, the stiffness of the spring mount was finalized, which was again experimentally validated.
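
The stiffness-retuning step follows from the single-degree-of-freedom relation f = (1/2π)·sqrt(k/m): raising the mount stiffness k pushes the natural frequency up and away from the engine's excitation band. A minimal sketch of that screening step (the mass, candidate stiffnesses and engine band below are illustrative assumptions, not values from the study):

    import math

    def natural_frequency_hz(stiffness_n_per_m: float, mass_kg: float) -> float:
        """Undamped natural frequency of a 1-DOF spring-mass system."""
        return math.sqrt(stiffness_n_per_m / mass_kg) / (2.0 * math.pi)

    # Hypothetical values: a ~23 kg mounted computer and candidate
    # spring-mount stiffnesses tried in successive iterations.
    mass = 23.0                    # kg, assumed
    engine_band = (4.0, 8.0)       # Hz, assumed engine resonance window

    for k in (2.0e4, 5.0e4, 1.0e5, 2.0e5):   # N/m, candidate stiffnesses
        f_n = natural_frequency_hz(k, mass)
        in_band = engine_band[0] <= f_n <= engine_band[1]
        print(f"k = {k:8.0f} N/m -> f_n = {f_n:5.1f} Hz"
              + ("  (inside engine band: reject)" if in_band else ""))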

Keywords: experimental modal analysis, FEM modal analysis, frequency, modal analysis, resonance, vibration

Procedia PDF Downloads 321
422 Investigating Effects of Vehicle Speed and Road PSDs on Response of a 35-Ton Heavy Commercial Vehicle (HCV) Using Mathematical Modelling

Authors: Amal G. Kurian

Abstract:

The use of mathematical modeling has seen a considerable boost in recent times with the development of many advanced algorithms and mathematical modeling capabilities. The advantages this method has over other methods are that it stays much closer to standard physics theories and thus represents a better theoretical model, takes less solving time, and allows various parameters to be changed for optimization, which is a big advantage, especially in the automotive industry. This thesis work focuses on a thorough investigation of the effects of vehicle speed and road roughness on the ride and structural dynamic responses of a heavy commercial vehicle. Since commercial vehicles are kept in operation continuously for long periods of time, it is important to study the effects of various physical conditions on the vehicle and its user. For this purpose, various experimental as well as simulation methodologies are adopted, ranging from experimental transfer path analysis to various road scenario simulations. To effectively investigate and eliminate several causes of unwanted responses, an efficient and robust technique is needed. Carrying forward this motivation, the present work focuses on the development of a mathematical model of a 4-axle configuration heavy commercial vehicle (HCV) capable of calculating the responses of the vehicle for different road PSD inputs and vehicle speeds. Outputs from the model include response transfer functions, response PSDs and the wheel forces experienced. A MATLAB code was developed to implement the objectives in a robust and flexible manner, which can be exploited further in studies of responses due to various suspension parameters, loading conditions and vehicle dimensions. The thesis work resulted in quantifying the effect of various physical conditions on the ride comfort of the vehicle. An increase in discomfort is seen with increasing velocity; the road profile also has a considerable effect on driver comfort. Details of the dominant modes at each frequency are analysed and reported in the work. The reduction in ride height, or the deflection of the tyres and suspension with loading, along with the load on each axle, is analysed; it is seen that the front axle supports a greater portion of the vehicle weight, while more of the payload weight is carried by the third and fourth axles. The deflection of the vehicle is seen to be well inside acceptable limits.
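
The modelling approach can be illustrated with a single wheel station: a quarter-car model's frequency response |H(f)| is combined with a road displacement PSD to give a body acceleration PSD and an RMS comfort metric. The sketch below uses an ISO 8608-style road input and illustrative parameters; it is a 2-DOF stand-in for the thesis' full 4-axle model, not a reproduction of it:

    import numpy as np

    # Quarter-car stand-in for one wheel station (parameters illustrative)
    ms, mu = 4500.0, 500.0       # kg: sprung / unsprung mass
    ks, cs = 4.0e5, 2.0e4        # N/m, N*s/m: suspension stiffness and damping
    kt = 2.0e6                   # N/m: tyre stiffness
    v = 20.0                     # m/s: vehicle speed

    # ISO 8608-style road PSD: Gd(n) = Gd0 * (n/n0)^(-2), class B-like roughness
    Gd0, n0 = 64e-6, 0.1         # m^3, cycles/m

    f = np.linspace(0.1, 25.0, 500)              # Hz
    omega = 2.0 * np.pi * f
    n = f / v                                    # spatial frequency seen at speed v
    S_road = Gd0 * (n / n0) ** (-2.0) / v        # temporal road PSD, m^2/Hz

    # Sprung-mass displacement response to road displacement, solved per frequency
    H = np.empty_like(f, dtype=complex)
    for i, om in enumerate(omega):
        A = np.array([[-ms * om**2 + 1j * om * cs + ks, -(1j * om * cs + ks)],
                      [-(1j * om * cs + ks), -mu * om**2 + 1j * om * cs + ks + kt]])
        b = np.array([0.0, kt])                  # road input enters via the tyre
        H[i] = np.linalg.solve(A, b)[0]

    S_acc = omega**4 * np.abs(H)**2 * S_road     # body acceleration PSD
    rms = np.sqrt(np.sum(0.5 * (S_acc[1:] + S_acc[:-1]) * np.diff(f)))
    print(f"body acceleration RMS at {v} m/s: {rms:.2f} m/s^2")

Sweeping v and Gd0 in this sketch reproduces the qualitative finding above: discomfort grows with speed and with road roughness.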

Keywords: mathematical modeling, HCV, suspension, ride analysis

Procedia PDF Downloads 259
421 Detection of Abnormal Process Behavior in Copper Solvent Extraction by Principal Component Analysis

Authors: Kirill Filianin, Satu-Pia Reinikainen, Tuomo Sainio

Abstract:

Frequent measurements of product stream quality create a data overload that becomes more and more difficult to handle. In the current study, plant history data with multiple variables were successfully treated by principal component analysis to detect abnormal process behavior, particularly in copper solvent extraction. The multivariate model is based on the concentration levels of the main process metals recorded by an industrial on-stream X-ray fluorescence analyzer. After mean-centering and normalization of the concentration data set, a two-component multivariate model was constructed using the principal component analysis algorithm. Normal operating conditions were defined through control limits assigned to squared score values on the x-axis and to residual values on the y-axis. Eighty percent of the data set was taken as the training set, and the multivariate model was tested with the remaining 20 percent. Model testing showed successful application of the control limits to detect abnormal behavior of the copper solvent extraction process as early warnings. Compared to the conventional technique of analyzing one variable at a time, the proposed model allows on-line detection of a process failure using information from all process variables simultaneously. Complex industrial equipment combined with advanced mathematical tools may be used for on-line monitoring both of process stream composition and of final product quality. Defining the normal operating conditions of the process supports reliable decision making in a process control room. Thus, industrial X-ray fluorescence analyzers equipped with an integrated data processing toolbox allow more flexibility in copper plant operation. It is recommended that the additional multivariate process control and monitoring procedures be applied separately to the major components and to the impurities. Principal component analysis may be utilized not only to control the content of major elements in process streams, but also for continuous monitoring of plant feed. The proposed approach has potential in on-line instrumentation, providing a fast, robust and cheap application with automation abilities.
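
A minimal sketch of the monitoring scheme described above: mean-centre and scale a training set, fit a two-component PCA, set control limits on the score distance (Hotelling T², the "squared score" axis) and on the residual distance (Q/SPE), then flag test samples that exceed either limit. The synthetic data and the empirical 99th-percentile limits below are assumptions for illustration; the study used plant XRF history data:

    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic stand-in for on-stream XRF data: 5 correlated metal concentrations
    n = 1000
    latent = rng.normal(size=(n, 2))
    X = latent @ rng.normal(size=(2, 5)) + 0.1 * rng.normal(size=(n, 5))

    split = int(0.8 * n)                    # 80/20 train/test split, as in the study
    Xtr, Xte = X[:split], X[split:]

    mu, sd = Xtr.mean(axis=0), Xtr.std(axis=0)
    Ztr, Zte = (Xtr - mu) / sd, (Xte - mu) / sd   # mean-centre and normalise

    # Two-component PCA via SVD of the training data
    U, s, Vt = np.linalg.svd(Ztr, full_matrices=False)
    P = Vt[:2].T                            # loadings (5 x 2)
    lam = (s[:2] ** 2) / (split - 1)        # component variances

    def t2_and_q(Z):
        """Hotelling T^2 (score distance) and Q/SPE (residual distance)."""
        T = Z @ P
        t2 = np.sum(T ** 2 / lam, axis=1)
        resid = Z - T @ P.T
        q = np.sum(resid ** 2, axis=1)
        return t2, q

    t2_tr, q_tr = t2_and_q(Ztr)
    # Empirical 99th-percentile control limits (chi-square-based limits are
    # the textbook alternative)
    t2_lim, q_lim = np.percentile(t2_tr, 99), np.percentile(q_tr, 99)

    t2_te, q_te = t2_and_q(Zte)
    alarms = (t2_te > t2_lim) | (q_te > q_lim)
    print(f"alarm rate on test data: {alarms.mean():.3f} "
          f"(T2 limit {t2_lim:.2f}, Q limit {q_lim:.4f})")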

Keywords: abnormal process behavior, failure detection, principal component analysis, solvent extraction

Procedia PDF Downloads 310
420 KPI and Tool for the Evaluation of Competency in Warehouse Management for Furniture Business

Authors: Kritchakhris Na-Wattanaprasert

Abstract:

The objective of this research is to design and develop a prototype key performance indicator (KPI) system suitable for warehouse management, based on a case study and user requirements. The prototype, developed for the warehouse of a furniture business, followed this methodology: identifying the scope of the research and studying related papers; gathering the necessary data and user requirements; developing key performance indicators based on the balanced scorecard; designing the program and database; coding the program and setting up the database relationships; and finally testing and debugging each module. The study uses the Balanced Scorecard (BSC) for selecting and grouping key performance indicators. Microsoft SQL Server 2010 is used to create the system database, and Microsoft Visual C# 2010 is chosen as the graphical user interface development tool. The system consists of six main menus: login, main data, financial perspective, customer perspective, internal process perspective, and learning and growth perspective. Each menu consists of key performance indicator forms, and each form contains a data import section, a data input section, a data search-and-edit section, and a report section. The system generates five main reports: KPI detail reports, a KPI summary report, a KPI graph report, a benchmarking summary report and a benchmarking graph report; the user selects the report conditions and time period. Development and testing showed that the system is one way of judging the extent to which warehouse objectives have been achieved, and that it helps warehouse functions proceed more efficiently. The system can be adjusted appropriately to be useful for other industries. To increase the usefulness of the key performance indicator system, the recommendations for further development are as follows: the warehouse should periodically review the target values and set more suitable targets as conditions fluctuate in the future, and it should likewise periodically review the key performance indicators themselves and set more suitable indicators in order to increase competitiveness and take advantage of new opportunities.

Keywords: key performance indicator, warehouse management, warehouse operation, logistics management

Procedia PDF Downloads 432
419 The Maps of Meaning (MoM) Consciousness Theory

Authors: Scott Andersen

Abstract:

Perhaps simply and rather unadornedly, consciousness is having multiple goals for action and the continuous adjudication of such goals to implement action, referred to as the Maps of Meaning (MoM) Consciousness Theory. The MoM theory triangulates through three parallel corollaries: action (behavior), mechanism (morphology/pathophysiology), and goals (teleology). (1) An organism's consciousness contains a fluid, nested set of goals. These goals are not intentionality but intersectionality, embodiment meeting the world, i.e., Darwinian inclusive fitness or randomization, then survival of the fittest. These goals form via gradual descent under inclusive fitness, the goals being the abstraction of a 'match' between the evolutionary environment and the organism. Human consciousness implements the brain efficiency hypothesis: genetics, epigenetics, and experience crystallize efficiencies, necessitating not what is best or objective but what is fit, i.e., perceived efficiency based on one's adaptive environment. These efficiencies are objectively arbitrary, but they determine the operation and level of one's consciousness, termed extreme thrownness. Since inclusive fitness drives efficiencies in physiologic mechanism, morphology and behavior (action) and originates one's goals, embodiment is necessarily entangled with human consciousness, as it is the intersection of mechanism and action (both necessitating embodiment) occurring in the world that determines fitness. Perception is the operant process of consciousness and is the consciousness's de facto goal adjudication process. Goal operationalization is fundamentally efficiency-based via one's unique neuronal mapping as a byproduct of genetics, epigenetics, and experience. Perception involves information intake and information discrimination, equally underpinned by efficiencies of inclusive fitness via extreme thrownness. Perception isn't a 'frame rate' but Bayesian priors of efficiency based on one's extreme thrownness. Consciousness, including human consciousness, is modular (i.e., a scalar level of richness, which builds up like building blocks) and dimensionalized (i.e., cognitive abilities become possibilities as emergent phenomena at various modularities, like stratified factors in factor analysis). The meta-dimensions of human consciousness seemingly include intelligence quotient, personality (five-factor model), richness of perception intake, and richness of perception discrimination, among other potentialities. Future consciousness research should utilize factor analysis to parse the modularities and dimensions of human consciousness, and should employ animal models.

Keywords: consciousness, perception, prospection, embodiment

Procedia PDF Downloads 62
418 Effective Apixaban Clearance with Cytosorb Extracorporeal Hemoadsorption

Authors: Klazina T. Havinga, Hilde R. H. de Geus

Abstract:

Introduction: Pre-operative coagulation management of patients prescribed Apixaban, a new oral anticoagulant (a factor Xa inhibitor), is difficult, especially when chronic kidney disease (CKD) causes drug overdose. Apixaban is not dialyzable due to its high level of protein binding. An antidote, Andexanet α, is available but expensive and has an unfavorably short half-life. We report the successful extracorporeal removal of Apixaban prior to emergency surgery with the CytoSorb® hemoadsorption device. Methods: An 89-year-old woman with CKD, on Apixaban for atrial fibrillation, presented at the ER with traumatic rib fractures, a flail chest, and an unstable spinal fracture (T12) for which emergency surgery was indicated. However, due to very high Apixaban levels, this surgery had to be postponed. Based on the Apixaban-specific anti-factor Xa activity (AFXaA) measurements at admission and 10 hours later, complete clearance was expected only after 48 hours. In order to enhance Apixaban removal, reduce the time to operation and therefore reduce pulmonary complications, CRRT with a CytoSorb® cartridge was initiated. AFXaA was measured frequently, as a substitute for Apixaban drug concentrations, pre- and post-adsorber, in order to calculate the adsorber-related clearance. Results: The admission AFXaA concentration, as a substitute for Apixaban drug levels, was 218 ng/ml, which decreased to 157 ng/ml after ten hours. Due to sustained anticoagulation effects, surgery was again postponed. However, after CRRT (Multifiltrate Pro, Fresenius Medical Care; blood flow 200 ml/min, dialysate flow 4000 ml/h, prescribed renal dose 51 ml/kg/h) with a CytoSorb® cartridge connected in series into the circuit was initiated, the AFXaA levels decreased quickly to sub-therapeutic levels (within 5 hours). The adsorber-related (indirect) Apixaban clearance was calculated every half hour as Cl = Qe × (AFXaA_pre − AFXaA_post)/AFXaA_pre, with Qe the plasma flow rate calculated from a hematocrit of 0.38 and a system blood flow rate of 200 ml/min: 100 ml/min, 72 ml/min and 57 ml/min. Although, as expected, the adsorber-related clearance decreased quickly due to saturation of the beads, the reduction rate achieved still resulted in a very rapid decrease in AFXaA levels. Surgery was ordered and possible within 5 hours after CytoSorb® initiation. Conclusion: The CytoSorb® hemoadsorption device enabled rapid correction of Apixaban-associated anticoagulation.
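
The clearance formula reduces to a short calculation. In the sketch below, Qe is derived from the stated blood flow (200 ml/min) and hematocrit (0.38); the pre-/post-adsorber AFXaA pairs are hypothetical values chosen so that the resulting clearances roughly match the reported 100, 72 and 57 ml/min:

    def adsorber_clearance_ml_min(afxaa_pre: float, afxaa_post: float,
                                  blood_flow_ml_min: float = 200.0,
                                  hematocrit: float = 0.38) -> float:
        """Indirect adsorber clearance Cl = Qe * (pre - post) / pre,
        with Qe the plasma flow estimated from blood flow and hematocrit."""
        qe = blood_flow_ml_min * (1.0 - hematocrit)   # plasma flow ~124 ml/min
        return qe * (afxaa_pre - afxaa_post) / afxaa_pre

    # Hypothetical pre-/post-adsorber AFXaA pairs (ng/ml), illustrating the
    # falling extraction as the beads saturate
    for pre, post in [(150.0, 29.0), (120.0, 50.0), (100.0, 54.0)]:
        print(f"pre {pre:5.1f} -> post {post:5.1f}: "
              f"Cl = {adsorber_clearance_ml_min(pre, post):5.1f} ml/min")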

Keywords: Apixaban, CytoSorb, emergency surgery, hemoadsorption

Procedia PDF Downloads 158
417 Maturity Level of Knowledge Management in Whole Life Costing in the UK Construction Industry: An Empirical Study

Authors: Ndibarefinia Tobin

Abstract:

The UK construction industry has been under pressure for many years to produce economical buildings which offer value for money, not only during the construction phase but, more importantly, during the full life of the building. Whole life costing is an economic analysis tool that takes into account the total cost of investment, ownership, operation and subsequent disposal of the product or system to which the method is applied. In spite of its importance, the practice is still crippled by the lack of tangible evidence and of 'know-how' skills and knowledge of the practice, i.e., the lack of professionals with the knowledge and training on the use of the practice in construction projects; this situation is compounded by the absence of available data on whole life costing from relevant projects, the lack of data collection mechanisms, and so on. The aforementioned problems have forced many construction organisations to adopt project enhancement initiatives to boost their performance in the use of whole life costing techniques so as to produce economical buildings which offer value for money during the construction stage as well as over the whole life of the building or asset. The management of knowledge in whole life costing is considered one of these many project enhancement initiatives, and it is becoming imperative to the performance and sustainability of an organisation. Procuring building projects using the whole life costing technique is heavily reliant on the knowledge, experience, ideas and skills of workers, which come from many sources including other individuals, electronic media and documents. Given the diversity of knowledge, capabilities and skills of employees across an organisation, it is important that they are directed and coordinated efficiently so as to capture, retrieve and share knowledge and thereby improve the performance of the organisation. The implementation of the knowledge management concept reaches different levels in each organisation, and measuring the maturity level of knowledge management in whole life costing practice paints a comprehensible picture of how knowledge is managed in construction organisations. Purpose: The purpose of this study is to identify knowledge management maturity in UK construction organisations adopting whole life costing in construction projects. Design/methodology/approach: This study adopted a survey method, conducted by distributing questionnaires to large construction companies that implement knowledge management activities in whole life costing practice in construction projects. Four levels of knowledge management maturity were proposed in this study. Findings: The results show that 34 contractors were at the practised level, 26 contractors at the managed level and 12 contractors at the continuously improved level.

Keywords: knowledge management, whole life costing, construction industry, knowledge

Procedia PDF Downloads 244
416 RA-Apriori: An Efficient and Faster MapReduce-Based Algorithm for Frequent Itemset Mining on Apache Flink

Authors: Sanjay Rathee, Arti Kashyap

Abstract:

Extraction of useful information from large datasets is one of the most important research problems, and association rule mining is one of the best methods for this purpose. Finding possible associations between items in large transaction-based datasets (finding frequent patterns) is the most important part of association rule mining. Many algorithms exist to find frequent patterns, but the Apriori algorithm remains a preferred choice due to its ease of implementation and natural tendency to be parallelized. Many single-machine Apriori variants exist, but the massive amount of data available these days is beyond the capacity of a single machine; therefore, to meet the demands of this ever-growing data, there is a need for an Apriori algorithm based on multiple machines. For these types of distributed applications, MapReduce is a popular fault-tolerant framework. Hadoop is one of the best open-source software frameworks using the MapReduce approach for distributed storage and distributed processing of huge datasets on clusters built from commodity hardware. However, the heavy disk I/O required at each iteration of a highly iterative algorithm like Apriori makes Hadoop inefficient. A number of MapReduce-based platforms have been developed for parallel computing in recent years; among them, two platforms, namely Spark and Flink, have attracted a lot of attention because of their inbuilt support for distributed computations. We earlier proposed a reduced Apriori algorithm on the Spark platform which outperforms parallel Apriori, firstly because of the use of Spark and secondly because of the improvement we proposed to standard Apriori. This work is therefore a natural sequel of our work and targets implementing, testing and benchmarking Apriori, Reduced-Apriori and our new algorithm, ReducedAll-Apriori, on Apache Flink, and comparing them with the Spark implementation. Flink, a streaming dataflow engine, overcomes the disk I/O bottlenecks of MapReduce, providing an ideal platform for distributed Apriori. Flink's pipelined structure allows the next iteration to start as soon as partial results of the earlier iteration are available, so there is no need to wait for all reducer results before starting a new iteration. We conduct in-depth experiments to gain insight into the effectiveness, efficiency and scalability of the Apriori and RA-Apriori algorithms on Flink.
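
For readers unfamiliar with the per-iteration structure being distributed, a plain single-machine Apriori looks like the sketch below: candidate generation followed by support counting, one pass per itemset size. The algorithms benchmarked in the paper parallelize exactly this loop on Spark or Flink (one MapReduce round per iteration), with improvements the sketch does not capture:

    from itertools import combinations

    def apriori(transactions, min_support):
        """Plain single-machine Apriori: returns {itemset: support_count}."""
        transactions = [frozenset(t) for t in transactions]
        # Frequent single items (L1)
        counts = {}
        for t in transactions:
            for item in t:
                key = frozenset([item])
                counts[key] = counts.get(key, 0) + 1
        frequent = {s: c for s, c in counts.items() if c >= min_support}
        result, k = dict(frequent), 2
        while frequent:
            items = sorted({i for s in frequent for i in s})
            # Candidate generation with the Apriori prune: keep a k-itemset
            # only if all of its (k-1)-subsets are frequent
            candidates = [frozenset(c) for c in combinations(items, k)
                          if all(frozenset(sub) in frequent
                                 for sub in combinations(c, k - 1))]
            # Support counting (the step distributed across workers)
            counts = {c: sum(1 for t in transactions if c <= t)
                      for c in candidates}
            frequent = {s: c for s, c in counts.items() if c >= min_support}
            result.update(frequent)
            k += 1
        return result

    txns = [{"a", "b", "c"}, {"a", "b"}, {"a", "c"}, {"b", "c"}, {"a", "b", "c"}]
    for itemset, cnt in sorted(apriori(txns, 3).items(), key=lambda x: -x[1]):
        print(set(itemset), cnt)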

Keywords: Apriori, Apache Flink, MapReduce, Spark, Hadoop, R-Apriori, frequent itemset mining

Procedia PDF Downloads 298
415 Analysis of Lift Force in Hydrodynamic Transport of a Finite Sized Particle in Inertial Microfluidics with a Rectangular Microchannel

Authors: Xinghui Wu, Chun Yang

Abstract:

Inertial microfluidics is a competitive fluidic method with applications in the separation of particles, cells and bacteria. In contrast to traditional microfluidic devices with low Reynolds numbers, inertial microfluidics works in the intermediate Re range, which brings about several intriguing inertial effects on particle separation/focusing that help meet real-world throughput requirements. Geometric modifications that make channels irregularly shaped can leverage fluid inertia to create complex secondary flows that adjust the particle equilibrium positions and thus enhance the separation resolution and throughput. Although inertial microfluidics has been extensively studied experimentally, our current understanding of its mechanisms is poor, making it extremely difficult to build rational design guidelines for the particle focusing locations, especially for irregularly shaped microfluidic channels. Inertial particle microfluidics in irregularly shaped channels was investigated in our group, and several fundamental issues remain to be addressed. One of them is the balance between the inertial lift forces and the secondary-flow drag forces. It is also critical to quantitatively describe the dependence of the lift forces on particle-particle interactions in irregularly shaped channels, such as a rectangular one. To provide physical insights into inertial microfluidics in channels of irregular shapes, in this work the immersed boundary-lattice Boltzmann method (IB-LBM) was introduced and validated to explore the transport characteristics and the underlying mechanisms of an inertially focusing single particle in a rectangular microchannel. The transport dynamics of a finite-sized particle were investigated over wide ranges of Reynolds number (20 < Re < 500) and particle size. The results show that the inner equilibrium positions are more difficult to attain in the rectangular channel, which can be explained by the secondary flow caused by the presence of a finite-sized particle. Furthermore, force decoupling analysis was utilized to study the effect of each type of lift force on the inertial migration, and a theoretical model for the lateral lift force on a finite-sized particle in a rectangular channel was established. Such a theoretical model can provide theoretical guidance for the design and operation of inertial microfluidics.

Keywords: inertial microfluidics, particle focusing, lift force, IB-LBM

Procedia PDF Downloads 72
414 A Study on the Effect of Design Factors of Slim Keyboard’s Tactile Feedback

Authors: Kai-Chieh Lin, Chih-Fu Wu, Hsiang Ling Hsu, Yung-Hsiang Tu, Chia-Chen Wu

Abstract:

With the rapid development of computer technology, the design of computers and keyboards is moving towards a trend of slimness, and the change in mobile input devices directly influences users' behavior. Although multi-touch applications allow entering text through a virtual keyboard, the performance, feedback, and comfort of the technology are inferior to those of a traditional keyboard, and while manufacturers have launched mobile touch keyboards and projection keyboards, their performance has not been satisfactory. Therefore, this study discusses the design factors of slim pressure-sensitive keyboards. The factors were evaluated with an objective evaluation (accuracy and speed) and a subjective evaluation (operability, recognition, feedback, and difficulty) depending on the shape (circle, rectangle, and L-shaped), thickness (flat, 3 mm, and 6 mm), and actuation force (35±10 g, 60±10 g, and 85±10 g) of the keys. Moreover, MANOVA and Taguchi methods (based on signal-to-noise ratios) were conducted to find the optimal level of each design factor. The research participants were divided into two groups according to their typing speed (30 words/minute). Considering the multitude of variables and levels, the experiments were implemented using a fractional factorial design, and a representative model of the research samples was established for input task testing. The findings of this study showed that participants with low typing speed primarily relied on vision to recognize the keys, whereas those with high typing speed relied on tactile feedback, which was affected by the thickness and force of the keys. In the objective and subjective evaluations, a combination of keyboard design factors that might result in higher performance and satisfaction was identified (L-shaped, 3 mm, and 60±10 g) as the optimal combination. The learning curve was analyzed in comparison with a traditional standard keyboard to investigate the influence of user experience on keyboard operation; the results indicated that the optimal combination still provided input performance inferior to that of a standard keyboard. The results could serve as a reference for the development of related products in industry and could be applied comprehensively to touch devices and input interfaces with which people interact.
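
The Taguchi step reduces to computing a signal-to-noise ratio per factor combination and preferring the levels with the highest S/N. A minimal sketch with the larger-the-better formulation (the accuracy scores below are hypothetical; only the factor names and levels come from the study):

    import math

    def sn_larger_the_better(values):
        """Taguchi S/N ratio where larger responses are better
        (e.g. typing accuracy): S/N = -10*log10(mean(1/y^2))."""
        return -10.0 * math.log10(sum(1.0 / v ** 2 for v in values) / len(values))

    # Hypothetical accuracy scores (%) for three of the factor combinations
    trials = {
        ("L-shaped", "3mm", "60g"): [96.0, 94.5, 95.5],
        ("circle", "flat", "35g"): [88.0, 84.0, 90.0],
        ("rectangle", "6mm", "85g"): [91.0, 89.5, 92.5],
    }
    for cfg, ys in trials.items():
        print(cfg, f"S/N = {sn_larger_the_better(ys):.2f} dB")
    best = max(trials, key=lambda k: sn_larger_the_better(trials[k]))
    print("highest S/N:", best)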

Keywords: input performance, mobile device, slim keyboard, tactile feedback

Procedia PDF Downloads 300
413 Understanding the Effect of Material and Deformation Conditions on the “Wear Mode Diagram”: A Numerical Study

Authors: A. Mostaani, M. P. Pereira, B. F. Rolfe

Abstract:

The increasing application of Advanced High Strength Steel (AHSS) in the automotive industry to fulfil crash requirements has introduced higher levels of wear in stamping dies and parts. Therefore, understanding wear behaviour in sheet metal forming is of great importance, as it can help to reduce the high costs currently associated with tool wear. At the contact between the die and the sheet, the tips of hard tool asperities interact with the softer sheet material, and understanding the deformation that occurs during this interaction is important for our overall understanding of the wear mechanisms. For these reasons, the scratching of a perfectly plastic material by a rigid indenter has been widely examined in the literature, with finite element modelling (FEM) used in recent years to further understand the behaviour. The 'wear mode diagram' has been commonly used to classify the deformation regime of the soft work-piece during scratching into three modes: ploughing, wedge formation, and cutting. This diagram, based on 2D slip-line theory and the upper bound method for a perfectly plastic work-piece and a rigid indenter, relates the different wear modes to attack angle and interfacial strength, and it has been the basis for many wear studies and wear models to date. Additionally, it has been concluded that galling is most likely to occur during the wedge formation mode. However, there has been little analysis in the literature of how the material behaviour and deformation conditions associated with metal forming processes influence the wear behaviour. Therefore, the first aim of this work is to use a commercial FEM package (Abaqus/Explicit) to build a 3D model that captures the wear modes during scratching with indenters of different attack angles and different interfacial strengths. The second goal is to utilise the developed model to understand how the wear modes might change in the presence of bulk deformation of the work-piece material as a result of the metal forming operation. Finally, the effect of the work-piece material properties, including strain hardening, is examined to understand how these influence the wear modes and wear behaviour. The results show that both strain hardening and substrate deformation can change the critical attack angle at which the wedge formation regime is activated.

Keywords: finite element, pile-up, scratch test, wear mode

Procedia PDF Downloads 329
412 Optimization of Dez Dam Reservoir Operation Using Genetic Algorithm

Authors: Alireza Nikbakht Shahbazi, Emadeddin Shirali

Abstract:

Since optimization issues in water resources are complicated by the variety of decision-making criteria and objective functions, it is sometimes impossible to resolve them through regular optimization methods, or doing so is time- and money-consuming. Therefore, the use of modern tools and methods is inevitable in resolving such problems. An accurate and sound utilization policy has to be determined in order to use natural resources such as water reservoirs optimally. Water reservoir programming studies aim to determine the final cultivated land area based on predefined agricultural models and water requirements; the dam utilization rule curve is also provided in such studies. The basic information applied in water reservoir programming studies generally includes meteorological, hydrological, agricultural and water reservoir related data, and the geometric characteristics of the reservoir. The Dez dam water resource system was simulated using this basic information in order to determine the capability of its reservoir to meet the objectives of the proposed plan. As a metaheuristic method, a genetic algorithm was applied in order to derive utilization rule curves (intersecting the reservoir volume), and MATLAB software was used to solve the resulting model. Rule curves were first obtained through the genetic algorithm; then the significance of using rule curves, and the reduction of decision-making variables in the system, was determined through system simulation and comparison of the results with the optimization results (Standard Operating Procedure). One of the most essential issues in the optimization of a complicated water resource system is the growing number of variables: a lot of time is required to find an optimum answer, and in some cases no desirable result is obtained. In this research, intersecting the reservoir volume has been applied as a modern model in order to reduce the number of variables. Water reservoir programming studies were performed based on the basic information, general hypotheses and standards, applying a monthly simulation technique over a statistical period of 30 years. Results indicated that the application of rule curves prevents extreme shortages and decreases the monthly shortages.
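
A compact illustration of the rule-curve idea: encode one storage-fraction target per month as the chromosome, simulate releases against that curve, and let a genetic algorithm minimize a shortage penalty. Everything below (inflows, demand, capacity, GA settings) is an illustrative assumption, not Dez dam data:

    import random

    random.seed(1)
    MONTHS, CAP, DEMAND = 12, 3000.0, 250.0          # illustrative units
    INFLOW = [420, 380, 300, 220, 150, 90, 60, 80, 140, 230, 330, 400]  # assumed

    def shortage(rule):
        """Simulate one year: rule[m] is the target storage fraction for
        month m; release tries to meet demand without dropping storage
        below the rule curve. Returns a squared-shortage penalty."""
        s, penalty = CAP * 0.6, 0.0
        for m in range(MONTHS):
            s = min(CAP, s + INFLOW[m])              # spill above capacity
            release = min(DEMAND, max(0.0, s - rule[m] * CAP))
            penalty += (DEMAND - release) ** 2
            s -= release
        return penalty

    def ga(pop_size=60, gens=200, pm=0.15):
        pop = [[random.random() * 0.8 for _ in range(MONTHS)]
               for _ in range(pop_size)]
        for _ in range(gens):
            pop.sort(key=shortage)
            elite = pop[: pop_size // 4]             # elitist selection
            children = []
            while len(children) < pop_size - len(elite):
                a, b = random.sample(elite, 2)
                cut = random.randrange(1, MONTHS)    # one-point crossover
                child = a[:cut] + b[cut:]
                if random.random() < pm:             # mutation
                    child[random.randrange(MONTHS)] = random.random() * 0.8
                children.append(child)
            pop = elite + children
        return min(pop, key=shortage)

    best = ga()
    print("annual squared-shortage penalty:", round(shortage(best), 1))
    print("rule curve (storage fraction by month):", [round(x, 2) for x in best])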

Keywords: optimization, rule curve, genetic algorithm method, Dez dam reservoir

Procedia PDF Downloads 267
411 Comparison of Two Strategies in Thoracoscopic Ablation of Atrial Fibrillation

Authors: Alexander Zotov, Ilkin Osmanov, Emil Sakharov, Oleg Shelest, Aleksander Troitskiy, Robert Khabazov

Abstract:

Objective: Thoracoscopic surgical ablation of atrial fibrillation (AF) can be performed with two technologies: the first strategy uses the AtriCure device (bipolar, non-irrigated, non-clamping), and the second the Medtronic device (bipolar, irrigated, clamping). The study presents a comparative analysis of the clinical outcomes of the two strategies in thoracoscopic ablation of AF using the AtriCure vs. Medtronic devices. Methods: In a two-centre study, 123 patients underwent thoracoscopic ablation of AF in the period from 2016 to 2020. Patients were divided into two groups: the first group comprises patients treated with the AtriCure device (N=63), and the second group those treated with the Medtronic device (N=60). Patients were comparable in age, gender, and initial severity of their condition. Group 1 was 65% male with a median age of 57 years, while group 2 was 75% male with a median age of 60 years. Group 1 included patients with paroxysmal AF (14.3%), persistent AF (68.3%) and long-standing persistent AF (17.5%); in group 2 the proportions were 13.3%, 13.3% and 73.3%, respectively. Median ejection fraction and indexed left atrial volume were 63% and 40.6 ml/m2 in group 1, and 56% and 40.5 ml/m2 in group 2. In addition, group 1 consisted of 39.7% patients with chronic heart failure (NYHA Class II) and 4.8% with chronic heart failure (NYHA Class III), versus 45% and 6.7% in group 2. Follow-up consisted of laboratory tests, chest X-ray, ECG, 24-hour Holter monitoring, and cardiopulmonary exercise testing. Duration of freedom from AF, late mortality rate, and prevalence of cerebrovascular events were compared between the two groups. Results: Exit block was achieved in all patients. According to the Clavien-Dindo classification of surgical complications, the fraction of adverse events was 14.3% in the first group and 16.7% in the second. The mean follow-up period was 50.4 (31.8; 64.8) months in the first group and 30.5 (14.1; 37.5) months in the second (P=0.0001). In group 1, total freedom from AF was achieved in 73.3% of patients, of whom 25% had additional antiarrhythmic drug (AAD) therapy or catheter ablation (CA); in group 2 the figures were 90% and 18.3%, respectively (for total freedom from AF, P<0.02). At follow-up, the late mortality rate in the first group was 4.8%, with no fatal events in the second; the prevalence of cerebrovascular events was higher in the first group than in the second (6.7% vs. 1.7%). Conclusions: Despite the relatively shorter follow-up of the second group, the strategy using the Medtronic device showed quite encouraging results. Further research is needed to evaluate the effectiveness of this strategy in the long-term period.

Keywords: atrial fibrillation, clamping, ablation, thoracoscopic surgery

Procedia PDF Downloads 110
410 Optimal Allocation of Battery Energy Storage Considering Stiffness Constraints

Authors: Felipe Riveros, Ricardo Alvarez, Claudia Rahmann, Rodrigo Moreno

Abstract:

Around the world, many countries have committed to the decarbonization of their electricity systems. Under this global drive, converter-interfaced generators (CIG) such as wind and photovoltaic generation appear as cornerstones for achieving these energy targets. Despite its benefits, increasing use of CIG brings several technical challenges in power systems, especially from a stability viewpoint. Among the key differences are the limited short-circuit current capacity, the inertia-less characteristic of CIG, and response times within the electromagnetic timescale. Along with the integration of CIG into the power system, one enabling technology for the energy transition towards low-carbon power systems is the battery energy storage system (BESS). Because of the flexibility that BESS provides in power system operation, its integration allows the variability and uncertainty of renewable energies to be mitigated, thus optimizing the use of existing assets and reducing operational costs. BESS can also support power system stability by injecting reactive power during faults, providing short-circuit current, and delivering fast frequency response. However, most methodologies for sizing and allocating BESS in power systems are based on economic aspects and do not exploit the benefits that BESS can offer to system stability. In this context, this paper presents a methodology for determining the optimal allocation of battery energy storage systems (BESS) in weak power systems with high levels of CIG. Unlike traditional economic approaches, this methodology incorporates stability constraints in allocating BESS, aiming to mitigate instability issues arising from weak grid conditions with low short-circuit levels. The proposed methodology offers valuable insights for power system engineers and planners seeking to maintain grid stability while harnessing the benefits of renewable energy integration. The methodology is validated on the reduced Chilean electrical system. The results show that integrating BESS into a power system with high levels of CIG under stability criteria contributes to decarbonizing and strengthening the network in a cost-effective way while sustaining system stability. This paper lays a foundation for understanding the benefits of integrating BESS in electrical power systems and coordinating their placement in future converter-dominated power systems.

Keywords: battery energy storage, power system stability, system strength, weak power system

Procedia PDF Downloads 61
409 Designing Electrically Pumped Photonic Crystal Surface Emitting Lasers Based on a Honeycomb Nanowire Pattern

Authors: Balthazar Temu, Zhao Yan, Bogdan-Petrin Ratiu, Sang Soon Oh, Qiang Li

Abstract:

Photonic crystal surface emitting lasers (PCSELs) have recently become an area of active research because of the advantages these lasers have over edge-emitting lasers and vertical cavity surface emitting lasers (VCSELs). PCSELs can emit laser beams with high power (from the order of a few milliwatts to watts or even tens of watts) which scales with the emission area, while maintaining single-mode operation even at large emission areas. Most PCSELs reported in the literature are air-hole based, with only a few demonstrations of nanowire-based PCSELs. We previously reported an optically pumped, nanowire-based PCSEL operating in the O band, using a honeycomb lattice. Nanowire-based PCSELs have the advantage that they can be grown on a silicon platform without threading dislocations. It is desirable to extend their operating wavelength to the C band to open up more applications, including eye-safe sensing, lidar and long-haul optical communications. In this work we first analyze how the lattice constant, nanowire diameter, nanowire height and side length of the hexagon in the honeycomb pattern can be changed to increase the operating wavelength of honeycomb-based PCSELs to the C band. Then, as a step towards making our device electrically pumped, we present finite-difference time-domain (FDTD) simulation results with metal on the nanowires. Results for different metals on the nanowire are presented in order to choose the metal that gives the device the best quality factor. The metals under consideration are those which form good ohmic contacts with p-type doped InGaAs, with low contact resistivity and a decent sticking coefficient to the semiconductor; such metals include tungsten, titanium, palladium and platinum. Using the chosen metal, we demonstrate the impact of the metal thickness on the quality factor of the device for a given nanowire height, and we also investigate how the nanowire height affects the quality factor for a fixed metal thickness. Finally, the main steps in making the practical device are discussed.

Keywords: designing nanowire PCSEL, designing PCSEL on silicon substrates, low threshold nanowire laser, simulation of photonic crystal lasers

Procedia PDF Downloads 20
408 CuIn₃Se₅ Colloidal Nanocrystals and Its Ink-Coated Films for Photovoltaics

Authors: M. Ghali, M. Elnimr, G. F. Ali, A. M. Eissa, H. Talaat

Abstract:

CuIn₃Se₅ is indexed as an ordered vacancy compound having excellent matching properties with the CuInGaSe (CIGS) solar absorber layer. For example, the valence band offset of CuIn₃Se₅ with CIGS is nearly 0.3 eV, and the lattice mismatch is less than 1%, besides the absence of discontinuity in their conduction bands. Thus, CuIn₃Se₅ can work as a passivation layer, repelling holes from the CIGS/CdS interface and hence reducing interface carrier recombination, consequently enhancing the efficiency of CIGS/CdS solar cells. Theoretically, it was reported earlier that an improvement in the efficiency of p-CIGS-based solar cells with a thin (~100 nm) layer of n-CuIn₃Se₅ is expected. Recently, an experiment demonstrated a significant improvement in the efficiency of molecular beam epitaxy (MBE) grown CIGS solar cells, from 13.4 to 14.5%, via inserting a thin MBE-grown Cu(In,Ga)₃Se₅ layer at the CdS/CIGS interface. It should be mentioned that CuIn₃Se₅, in either bulk or thin film form, is usually fabricated by high-vacuum physical vapor deposition techniques (e.g., three-source co-evaporation, RF sputtering, flash evaporation, and molecular beam epitaxy). In addition, achieving photosensitive films of n-CuIn₃Se₅ is important for new hybrid organic/inorganic structures, where an inorganic photo-absorber layer with n-type conductivity can form an n-p junction with an organic p-type material (e.g., conductive polymers). A detailed study of the physical properties of CuIn₃Se₅ is still necessary for a better understanding of device operation and further improvement of solar cell performance. Here, we report on the low-cost synthesis of CuIn₃Se₅ at the nanoscale, with an average diameter of ~10 nm, using simple solution-based colloidal chemistry. In contrast to bulk tetragonal CuIn₃Se₅ crystals traditionally grown using high-vacuum technology, our colloidal CuIn₃Se₅ nanocrystals show a cubic crystal structure with a nanoparticle shape and a band gap of ~1.33 eV. Ink-coated thin films prepared from these nanocrystal colloids display n-type character, a 1.26 eV band gap and strong photo-responsive behavior under incident white light. This suggests the potential use of colloidal CuIn₃Se₅ as an active layer in all-solution-processed thin film solar cells.

Keywords: nanocrystals, CuInSe, thin film, optical properties

Procedia PDF Downloads 155
407 Effect of Downstream Pressure in Tuning the Flow Control Orifices of Pressure Fed Reaction Control System Thrusters

Authors: Prakash M.N, Mahesh G, Muhammed Rafi K.M, Shiju P. Nair

Abstract:

Introduction: In launch vehicle missions, reaction control thrusters are used for three-axis stabilization of the vehicle during the coasting phases. A pressure-fed propulsion system is used for the operation of these thrusters due to its lower complexity. In liquid stages, these thrusters are designed to draw propellant from the same tank used by the main propulsion system, so flow control orifices are used in the feed lines to regulate the propellant flow rates of the thrusters. These orifices are calibrated separately, as per the flow rate requirement of individual thrusters, for the nominal operating conditions. In some missions, it was observed that the thrusters were operating at higher thrust than nominal. This issue was addressed through a series of cold-flow and hot tests carried out on the ground, and this paper elaborates on the details. Discussion: In order to find the exact reason for this phenomenon, two flight-configuration thrusters were identified and hot tested on the ground with calibrated orifices and feed lines. During these tests, the chamber pressure, which is directly proportional to the thrust, was measured. In both cases, chamber pressures higher than nominal by 0.32 bar to 0.7 bar were recorded; the increase in chamber pressure was due to an increase in the oxidizer flow rate of both thrusters. Upon further investigation, it was observed that the calibration of the feed line had been done with ambient pressure downstream, whereas in actual flight conditions the orifices operate with 10 to 11 bar pressure downstream. Due to this higher downstream pressure, the flow through the orifices increases, and the thrusters therefore operate at higher chamber pressure. Conclusion: As part of further investigatory tests, two fresh thrusters were realized, and their orifice tuning was carried out in three different ways. In the first trial, the orifice tuning was done with 1 bar pressure simulated downstream. The second trial was done with the injector assembled downstream. In the third trial, a downstream pressure equal to the flight injection pressure was simulated. Using these calibrated orifices, hot tests were carried out in simulated vacuum conditions. Chamber pressure and flow rate values exactly matched the predictions for the second and third trials, but for the first trial the chamber pressure values obtained in the hot test were higher than predicted. This clearly shows that the flow was detached in the first trial and attached in the second and third trials. Hence, the error in tuning the flow control orifices is pinpointed as the reason for the higher chamber pressure observed in flight.
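
One way to see why the downstream condition matters is through the discharge coefficient: at near-ambient back pressure the flow through a sharp orifice can cavitate and detach, while at flight back pressure it stays attached, passing more flow for the same pressure drop. The sketch below illustrates that mechanism with assumed discharge coefficients, an assumed critical cavitation number and illustrative pressures and density; none of these are the flight values:

    import math

    def orifice_mdot(cd, area_m2, rho, dp_pa):
        """Incompressible orifice flow: mdot = Cd * A * sqrt(2 * rho * dP)."""
        return cd * area_m2 * math.sqrt(2.0 * rho * dp_pa)

    # Illustrative numbers: a 1 mm orifice, the same 7 bar drop in both cases
    rho, d = 1440.0, 1.0e-3            # kg/m^3 (propellant-like), m
    A = math.pi * d ** 2 / 4.0
    dp, p_vap = 7e5, 1e5               # Pa: pressure drop, vapor pressure
    cd_attached, cd_detached = 0.75, 0.65   # assumed discharge coefficients
    sigma_crit = 1.0                        # assumed critical cavitation number

    for label, p_down in (("calibration (ambient)", 1e5), ("flight", 11e5)):
        sigma = (p_down - p_vap) / dp       # cavitation number
        cd = cd_attached if sigma > sigma_crit else cd_detached
        m = orifice_mdot(cd, A, rho, dp)
        print(f"{label:22s} sigma = {sigma:4.2f}  mdot = {m*1000:5.1f} g/s")

Under these assumptions the flight case passes roughly 15% more flow at the same pressure drop, the same direction as the chamber pressure rise reported above.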

Keywords: reaction control thruster, propellant, orifice, chamber pressure

Procedia PDF Downloads 201
406 Proposed Design of an Optimized Transient Cavity Picosecond Ultraviolet Laser

Authors: Marilou Cadatal-Raduban, Minh Hong Pham, Duong Van Pham, Tu Nguyen Xuan, Mui Viet Luong, Kohei Yamanoi, Toshihiko Shimizu, Nobuhiko Sarukura, Hung Dai Nguyen

Abstract:

There is a great deal of interest in developing all-solid-state tunable ultrashort pulsed lasers emitting in the ultraviolet (UV) region for applications such as micromachining, investigation of charge carrier relaxation in conductors, and probing of ultrafast chemical processes. However, direct short-pulse generation is not as straightforward in solid-state gain media as it is for near-IR tunable solid-state lasers such as Ti:sapphire, due to the difficulty of obtaining continuous-wave laser operation, which is required for Kerr-lens mode-locking schemes utilizing spatial or temporal Kerr-type nonlinearity. In this work, the transient cavity method, which was reported to generate ultrashort laser pulses in dye lasers, is extended to a solid-state gain medium. Ce:LiCAF was chosen among the rare-earth-doped fluoride laser crystals emitting in the UV region because of its broad tunability (from 280 to 325 nm) with enough bandwidth to generate 3-fs pulses, its sufficiently large effective gain cross-section (6.0 × 10⁻¹⁸ cm²) favorable for oscillators, and its high saturation fluence (115 mJ/cm²). Numerical simulations are performed to investigate the spectro-temporal evolution of the broadband UV laser emission from Ce:LiCAF, represented as a system of two homogeneously broadened singlet states, by solving the rate equations extended to multiple wavelengths. The goal is to find the appropriate cavity length and Q-factor to achieve the optimal photon cavity decay time and pumping energy for resonator transients that will lead to picosecond UV laser emission from a Ce:LiCAF crystal pumped by the fourth harmonic (266 nm) of a Nd:YAG laser. Results show that a single picosecond pulse can be generated from a 1-mm, 1 mol% Ce³⁺-doped LiCAF crystal using an output coupler with 10% reflectivity (low Q) and an oscillator cavity that is 2 mm long (short cavity). This technique can be extended to other fluoride-based solid-state laser gain media.
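
The resonator-transient mechanism can be sketched with single-mode rate equations: a short, low-Q cavity gives a photon decay time of a few picoseconds, and a pump driving the inversion through threshold produces one short spike of emission. In the sketch below only the gain cross-section, the 2-mm cavity and the 10% output coupling come from the abstract; the upper-state lifetime, refractive index, pump rate and spontaneous-emission factor are assumptions, and the two-level single-wavelength form is a simplification of the multi-wavelength model used in the paper:

    import math

    c = 3e10                     # cm/s
    sigma = 6.0e-18              # cm^2, effective gain cross-section (abstract)
    tau_f = 25e-9                # s, assumed upper-state lifetime
    n_ref = 1.41                 # assumed refractive index of LiCAF
    L_cav = 0.2                  # cm: 2-mm cavity (abstract)
    R = 0.10                     # output coupler reflectivity (abstract, low Q)
    tau_c = (2 * L_cav * n_ref / c) / (-math.log(R))   # photon cavity decay time
    beta = 1e-4                  # assumed spontaneous-emission factor

    dt, T = 1e-13, 6e-9          # 0.1 ps Euler steps over 6 ns
    pump_end, Rp = 4e-9, 5e26    # assumed 4-ns square pump, ions/cm^3/s

    N, phi, t = 0.0, 0.0, 0.0            # inversion and photon densities (cm^-3)
    peak_phi, peak_t = 0.0, 0.0
    while t < T:
        g = c / n_ref * sigma * N        # gain rate per photon (1/s)
        pump = Rp if t < pump_end else 0.0
        dN = pump - N / tau_f - g * phi
        dphi = g * phi - phi / tau_c + beta * N / tau_f
        N += dN * dt
        phi += dphi * dt
        if phi > peak_phi:
            peak_phi, peak_t = phi, t
        t += dt

    print(f"photon cavity decay time: {tau_c*1e12:.1f} ps")
    print(f"pulse peak at t = {peak_t*1e9:.2f} ns, photon density {peak_phi:.2e} cm^-3")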

Keywords: rare-earth-doped fluoride gain medium, transient cavity, ultrashort laser, ultraviolet laser

Procedia PDF Downloads 359
405 A Rare Cause of Abdominal Pain Post Caesarean Section

Authors: Madeleine Cox

Abstract:

Objective: discussion of the diagnosis of vernix caseosa peritonitis, recovery, and the subsequent caesarean section. Case: a 30-year-old G4P1 presented in labour at 40 weeks, planning a vaginal birth after a previous caesarean section. She underwent an emergency caesarean section due to concerns for fetal wellbeing on CTG and was found to have a thin lower segment with a very small area of dehiscence centrally. The operation was uncomplicated, and she recovered and went home 2 days later. She then re-presented to the emergency department on day 6 postpartum feeling very unwell, with significant abdominal pain, tachycardia and urinary retention; she had a raised white cell count of 13.7 with neutrophils of 11.64 and a CRP of 153. An abdominal ultrasound was poorly tolerated by the patient and did not aid the diagnosis, and chest and abdominal X-rays were normal. She underwent a CT of the chest and abdomen, which found a small volume of free fluid with no apparent collection. Given that no obvious cause of her symptoms was found and the patient did not improve, she had a repeat CT 2 days later, which showed progression of the free fluid. A diagnostic laparoscopy was performed with the general surgeons, which revealed turbid fluid and an inflamed appendix, which was removed. The patient improved remarkably post-operatively. The histology showed periappendicitis and acute appendicitis with a marked serosal inflammatory reaction to vernix caseosa. Following this, the patient went on to recover well. Four years later, the patient was booked for an elective caesarean section; on entry into the abdomen there were very minimal adhesions, and the surgery and her subsequent recovery were uncomplicated. Discussion: this case represents the diagnostic dilemma of a patient who presents unwell without a clear cause. In this circumstance, multiple modes of imaging did not aid her diagnosis, and so she underwent diagnostic surgery. It is important to evaluate whether or not a patient is following the typical course of postoperative pain and to adjust management accordingly; a multi-team approach can help to provide a diagnosis for these patients. Conclusion: vernix caseosa peritonitis is a rare cause of acute abdomen postpartum. There are few reports in the literature of the initial presentation and no reports on the possible effects on future pregnancies. This patient did not have any complications in her following pregnancy or delivery secondary to her diagnosis of vernix caseosa peritonitis, which may assist in counselling other women who have had this uncommon diagnosis.

Keywords: peritonitis, obstetrics, caesarean section, pain

Procedia PDF Downloads 106
404 Causal Inference Engine between Continuous Emission Monitoring System Combined with Air Pollution Forecast Modeling

Authors: Yu-Wen Chen, Szu-Wei Huang, Chung-Hsiang Mu, Kelvin Cheng

Abstract:

This paper developed a data-driven model to deal with the causality between the Continuous Emission Monitoring System (CEMS, operated by the Environmental Protection Administration, Taiwan) in industrial factories and the air quality of the surrounding environment. Compared to the heavy computational burden of traditional numerical models for regional weather and air pollution simulation, the lightweight proposed model can provide hourly forecasts from current observations of weather, air pollution and factory emissions. The observational data include wind speed, wind direction, relative humidity, temperature and other variables, and can be collected in real time from the open APIs of Civil IoT Taiwan, which are sourced from 439 weather stations, 10,193 qualitative air stations, 77 national quantitative stations and 140 CEMS quantitative industrial factories. This study completed a causal inference engine that gives air pollution forecasts for the next 12 hours related to local industrial factories. The pollution forecasts are produced hourly with a grid resolution of 1 km × 1 km on the IIoTC (Industrial Internet of Things Cloud) and saved in netCDF4 format. The procedures used to generate the forecasts comprise data recalibration, outlier elimination, Kriging interpolation, and particle tracking with random walk techniques for the mechanisms of diffusion and advection; the solution of these equations reveals the causality between factory emissions and the associated air pollution. Further, with the aid of installed real-time flue emission sensors (Total Suspended Particulates, TSP) and the forecasted air pollution map, this study also disclosed the conversion mechanism between TSP and PM2.5/PM10 for different regional and industrial characteristics, based on long-term data observation and calibration. These different qualitative and quantitative time series successfully realize a cloud-based causal inference engine that is practicable for factory management control. Once the forecasted air quality for a region is marked as harmful, the correlated factories are notified and asked to curtail their operation and reduce emissions in advance.
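
The advection-diffusion step can be illustrated with the particle tracking/random walk technique the paper names: each particle carries a share of a stack's emitted mass, is displaced by the mean wind each step, and is jittered by a Gaussian of variance 2KΔt for turbulent diffusion, then binned onto the forecast grid. The wind, diffusivity and horizon below are illustrative assumptions; only the 1 km grid resolution comes from the abstract:

    import numpy as np

    rng = np.random.default_rng(42)

    n_particles = 20000
    dt, n_steps = 60.0, 60            # 60 s steps over a 1-hour horizon
    u, v = 3.0, 1.0                   # m/s mean wind (east, north), assumed
    K = 50.0                          # m^2/s, assumed eddy diffusivity

    pos = np.zeros((n_particles, 2))  # all particles start at the stack
    for _ in range(n_steps):
        pos[:, 0] += u * dt + rng.normal(0.0, np.sqrt(2 * K * dt), n_particles)
        pos[:, 1] += v * dt + rng.normal(0.0, np.sqrt(2 * K * dt), n_particles)

    # 40 bins over a 40 km window = 1 km x 1 km cells, as in the forecasts
    counts, xedges, yedges = np.histogram2d(pos[:, 0], pos[:, 1],
                                            bins=40, range=[[-5e3, 35e3]] * 2)
    conc = counts / n_particles       # fraction of emitted mass per cell
    i, j = np.unravel_index(np.argmax(conc), conc.shape)
    print(f"peak cell fraction {conc[i, j]:.3f} near "
          f"x = {xedges[i]/1e3:.0f}-{xedges[i+1]/1e3:.0f} km, "
          f"y = {yedges[j]/1e3:.0f}-{yedges[j+1]/1e3:.0f} km downwind")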

Keywords: continuous emission monitoring system, total suspended particulates, causal inference, air pollution forecast, IoT

Procedia PDF Downloads 87
403 Ethiopian Textile and Apparel Industry: Study of the Information Technology Effects in the Sector to Improve Their Integrity Performance

Authors: Merertu Wakuma Rundassa

Abstract:

Global competition and rapidly changing customer requirements are forcing major changes in the production styles and configuration of manufacturing organizations. Increasingly, traditional centralized and sequential manufacturing planning, scheduling and control mechanisms are being found insufficiently flexible to respond to changing production styles and highly dynamic variations in product requirements. The traditional approaches limit the expandability and reconfigurability of manufacturing systems, and many businesses face increasing pressure to lower production costs, improve production quality and increase responsiveness to customers. In textile and apparel manufacturing, globalization has led to increased competition and quality awareness, and these industries have changed tremendously in the last few years. To sustain competitive advantage, companies must re-examine and fine-tune their business processes to deliver high-quality goods at very low cost, and it has become very important for the textile and apparel industries to integrate themselves with information technology to survive. IT can create competitive advantages for companies by improving coordination and communication among trading partners, increasing the availability of information for intermediaries and customers, and providing added value at various stages along the entire chain. Ethiopia is in the process of realizing its potential as a future sourcing location for the global textile and garment industry. With a population of over 90 million people and the fastest growing non-oil economy in Africa, Ethiopia today offers considerable opportunities for international investors. For the textile and garment industry, Ethiopia promises a low-cost production location with natural resources such as cotton, enabling the setup of vertically integrated textile and garment operations. However, due to the lack of integration of its business activities, the textile and apparel industry of Ethiopia has faced the problem that it cannot compete in the global market. The aim of this paper is therefore to study the trends in the Ethiopian textile and apparel industry regarding the application of different IT systems to integrate it into the global market.

Keywords: information technology, business integrity, textile and apparel industries, Ethiopia

Procedia PDF Downloads 364
402 Fuzzy Expert Approach for Risk Mitigation on Functional Urban Areas Affected by Anthropogenic Ground Movements

Authors: Agnieszka A. Malinowska, R. Hejmanowski

Abstract:

A number of European cities are strongly affected by ground movements caused by anthropogenic activities or post-anthropogenic metamorphosis. These are mainly water pumping, current mining operations, the collapse of post-mining underground voids, and mining-induced earthquakes. Such activities lead to large- and small-scale ground displacements and ground ruptures. Ground movements occurring in urban areas can considerably affect the stability and safety of structures and infrastructure. The complexity of the ground deformation phenomenon, in relation to the vulnerability of structures and infrastructure, leads to considerable constraints in assessing the threat to those objects. However, increasing access to free software and satellite data could pave the way for developing new methods and strategies for environmental risk mitigation and management. Open source geographic information systems (OS GIS) may support data integration, management, and risk analysis. Recently developed methods, based on fuzzy logic and expert methods, for assessing the risk of damage to buildings and infrastructure could be integrated into OS GIS; those methods have been verified by back analysis, proving their accuracy. Moreover, they can be supported by ground displacement observations: based on freely available data from the European Space Agency and free software, ground deformation can be estimated. The main innovation presented in this paper is the application of open source software (OS GIS) for integrating the developed models and assessing the threat to urban areas. These approaches are reinforced by analysis of ground movement based on free satellite data, which supports the verification of the ground movement prediction models. Moreover, satellite data enable the mapping of ground deformation in urbanized areas. The developed models and methods have been implemented in an urban area endangered by underground mining activity. Vulnerability maps supported by satellite ground movement observations would mitigate the hazards of land displacement in urban areas close to mines.
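
A toy version of the fuzzy-logic scoring such methods use is sketched below: fuzzify a ground movement indicator and a structural vulnerability score, fire a small rule base, and defuzzify to a risk degree. The membership functions, rules and thresholds are invented for illustration and are not the calibrated model from the study:

    def tri(x, a, b, c):
        """Triangular membership function."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    def building_risk(tilt_mm_per_m, vulnerability):
        """Ground tilt (mm/m, e.g. from InSAR) and a 0-1 vulnerability
        score -> risk degree in [0, 1], Mamdani-style."""
        tilt = {"low": tri(tilt_mm_per_m, -1, 0, 2.5),
                "medium": tri(tilt_mm_per_m, 1, 3, 5),
                "high": tri(tilt_mm_per_m, 4, 8, 100)}
        vuln = {"low": tri(vulnerability, -0.5, 0, 0.5),
                "high": tri(vulnerability, 0.3, 1, 1.5)}
        # Rule base: activation = min(antecedents), each rule has an output level
        rules = [(min(tilt["low"], vuln["low"]), 0.0),
                 (min(tilt["medium"], vuln["low"]), 0.3),
                 (min(tilt["low"], vuln["high"]), 0.4),
                 (min(tilt["medium"], vuln["high"]), 0.7),
                 (min(tilt["high"], vuln["low"]), 0.7),
                 (min(tilt["high"], vuln["high"]), 1.0)]
        num = sum(w * level for w, level in rules)
        den = sum(w for w, _ in rules)
        return num / den if den else 0.0   # weighted-average defuzzification

    for t, v in [(0.5, 0.2), (3.0, 0.8), (7.0, 0.9)]:
        print(f"tilt {t} mm/m, vulnerability {v}: risk = {building_risk(t, v):.2f}")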

Keywords: fuzzy logic, open source geographic information science (OS GIS), risk assessment on urbanized areas, satellite interferometry (InSAR)

Procedia PDF Downloads 160
401 Wind Generator Control in Isolated Site

Authors: Glaoui Hachemi

Abstract:

Wind has been proven to be a cost-effective and reliable energy source. Technological advancements over recent years have placed wind energy in a firm position to compete with conventional power generation technologies. Algeria has a vast uninhabited land area, of which the southern desert represents the greatest part, with a considerable wind regime. In this paper, an analysis of wind energy utilization as a viable energy substitute in six selected sites widely distributed over the south of Algeria is presented. Wind speed frequency distribution data obtained from the Algerian Meteorological Office are used to calculate the average wind speed and the available wind power. The annual energy produced by the Fuhrlander FL 30 wind machine is obtained using two methods. The analysis shows that in southern Algeria, at 10 m height, the available wind power varies between 160 and 280 W/m², except for Tamanrasset. The highest potential wind power was found at Adrar, where the wind speed exceeds 3 m/s 88% of the time. The annual wind energy generated by that machine lies between 33 and 61 MWh, except for Tamanrasset, with only 17 MWh. Since wind turbines are usually installed at heights greater than 10 m, an increased output of wind energy can be expected. The wind resource appears suitable for power production in the south and could provide a viable substitute for diesel oil for irrigation pumps and electricity generation. In this paper, a model of the wind turbine (WT) with a permanent magnet synchronous generator (PMSG) and its associated controllers is presented. The increase of wind power penetration in power systems has meant that conventional power plants are gradually being replaced by wind farms, and today wind farms are required to participate actively in power system operation in the same way as conventional power plants. Power system operators have revised the grid connection requirements for wind turbines and wind farms, and now demand that these installations be able to carry out more or less the same control tasks as conventional power plants. For dynamic power system simulations, the PMSG wind turbine model includes an aerodynamic rotor model, a lumped-mass representation of the drive train system, and a generator model. We propose a model, implemented in MATLAB/Simulink, of each system component of an off-grid small wind turbine.
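
The power density figures quoted above follow from the cubic dependence of wind power on wind speed, P/A = ½ρE[v³]. The sketch below computes the mean power density and a rough annual energy estimate from a wind speed histogram; the histogram, rotor area, and efficiency are illustrative placeholders, not the paper's site data or the Fuhrlander FL 30 specification.

```python
import numpy as np

rho = 1.225  # air density at sea level, kg/m^3

# Hypothetical wind speed histogram for one site: bin centers (m/s) and
# observed frequencies (the paper's actual distributions come from the
# Algerian Meteorological Office and are not reproduced here).
speeds = np.array([1, 3, 5, 7, 9, 11])                    # m/s
freqs = np.array([0.10, 0.25, 0.30, 0.20, 0.10, 0.05])    # sum to 1

# Mean available wind power density: P/A = 0.5 * rho * E[v^3]  (W/m^2)
power_density = 0.5 * rho * np.sum(freqs * speeds**3)

# Rough annual energy for a turbine of rotor area A with overall power
# coefficient Cp (both values illustrative assumptions):
A, Cp, hours = 150.0, 0.35, 8760.0
annual_energy_MWh = power_density * A * Cp * hours / 1e6
print(f"power density: {power_density:.0f} W/m^2, "
      f"annual energy: {annual_energy_MWh:.1f} MWh")
```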

Keywords: wind generator systems, permanent magnet synchronous generator (PMSG), wind turbine (WT) modeling, MATLAB/Simulink environment

Procedia PDF Downloads 338
400 Identifying the Determinants of Compliance with Maritime Environmental Legislation in the North and Baltic Sea Area: A Model Developed from Exploratory Qualitative Data Collection

Authors: Thea Freese, Michael Gille, Andrew Hursthouse, John Struthers

Abstract:

Ship operators on the North and Baltic Sea have been experiencing increased political interest in marine environmental protection and cleaner vessel operations. Stricter legislation on SO2 and NOx emissions, ballast water management, and other measures of protection are currently being phased in or will come into force in the coming years. These measures benefit the health of the marine environment while increasing companies' operational costs. In times of excess shipping capacity and the linked consolidation in the industry, non-compliance with environmental rules is one way companies might hope to stay competitive in both intra- and inter-modal trade. Around 5-15% of industry participants are believed to neglect laws on vessel-source pollution, willingly or unwillingly. Exploratory in-depth interviews conducted with 12 experts from various stakeholder groups informed the researchers about variables influencing compliance levels, including awareness and apprehension, willingness to comply, ability to comply, and effectiveness of controls. The semi-structured expert interviews were evaluated using qualitative content analysis, and a model of determinants of compliance was developed and is presented here. While most vessel operators endeavour to achieve full compliance with environmental rules, a lack of available technical solutions, of expedient implementation and operation, and of economic feasibility might prove a hindrance. Ineffective control systems, on the other hand, foster willing non-compliance. With respect to motivations, a lack of time, a lack of financial resources, and the absence of commercial advantages decrease compliance levels. These and other variables were inductively developed from qualitative data and integrated into a model of environmental compliance. The outcomes presented here form part of a wider research project on the economic effects of maritime environmental legislation. Research on the determinants of compliance might inform policy-makers about the actual behaviour of shipping companies and might further the development of a comprehensive legal system for environmental protection.

Keywords: compliance, marine environmental protection, exploratory qualitative research study, clean vessel operations, North and Baltic Sea area

Procedia PDF Downloads 383
399 Production Optimization under Geological Uncertainty Using Distance-Based Clustering

Authors: Byeongcheol Kang, Junyi Kim, Hyungsik Jung, Hyungjun Yang, Jaewoo An, Jonggeun Choe

Abstract:

It is important to characterize reservoir properties for better production management. Because information is limited, there are geological uncertainties in highly heterogeneous or channelized reservoirs. One solution is to generate multiple equiprobable realizations using geostatistical methods; however, some realizations have incorrect properties and need to be excluded for simulation efficiency and reliability. We propose a novel model selection scheme based on distance-based clustering for the reliable application of a production optimization algorithm. Distance is defined as a degree of dissimilarity between the data, and we calculate the Hausdorff distance, which is useful for shape matching of reservoir models, to classify the models by similarity. We use multi-dimensional scaling (MDS) to project the models onto a two-dimensional space and group them by K-means clustering. Rather than simulating all models, we choose one representative model from each cluster and find the best model, whose production rates are closest to the true values. From this process, we can select good reservoir models near the best model with high confidence. We generate 100 channel reservoir models using single normal equation simulation (SNESIM). Since oil and gas prefer to flow through the sand facies, it is critical to characterize the pattern and connectivity of the channels in the reservoir. After calculating Hausdorff distances and projecting the models by MDS, we can see that the models cluster according to their channel patterns. These channel distributions affect the operation controls of each production well, so the model selection scheme improves the management optimization process. For production optimization we use particle swarm optimization (PSO), a useful global search algorithm. PSO is good at finding the global optimum of an objective function, but it is time-consuming because it uses many particles and iterations, and with multiple reservoir models the simulation time soars. Using the proposed method, we can select good and reliable models that already match the production data. Considering the geological uncertainty of the reservoir, we obtain well-optimized production controls for maximum net present value. The proposed method offers a novel way to select good cases among the various realizations, and the model selection scheme can be applied not only to production optimization but also to history matching and other ensemble-based methods for efficient simulation.
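
A minimal sketch of the selection pipeline, pairing SciPy's directed Hausdorff distance with MDS and K-means from scikit-learn. The random point sets stand in for channel-facies geometries, and the cluster count and data are illustrative assumptions, not the paper's 100 SNESIM realizations.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff
from sklearn.manifold import MDS
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Stand-in for 100 channel realizations: each model is the (x, y) point
# set of its channel-facies cells (synthetic here; real models would come
# from SNESIM).
models = [rng.random((50, 2)) for _ in range(100)]

# Symmetric Hausdorff distance matrix between all model pairs.
n = len(models)
D = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        d = max(directed_hausdorff(models[i], models[j])[0],
                directed_hausdorff(models[j], models[i])[0])
        D[i, j] = D[j, i] = d

# Project onto 2-D with metric MDS on the precomputed distances,
# then group the realizations with K-means.
coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(D)
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(coords)

# One representative per cluster: the member nearest its cluster centroid.
for k in range(5):
    idx = np.where(labels == k)[0]
    centroid = coords[idx].mean(axis=0)
    rep = idx[np.argmin(np.linalg.norm(coords[idx] - centroid, axis=1))]
    print(f"cluster {k}: representative model {rep}")
```

Only the representatives are then run through the flow simulator and PSO, which is where the scheme saves its computation time.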

Keywords: distance-based clustering, geological uncertainty, particle swarm optimization (PSO), production optimization

Procedia PDF Downloads 144
398 An Autonomous Space Debris-Removal System for Effective Space Missions

Authors: Shriya Chawla, Vinayak Malhotra

Abstract:

Space exploration has seen an exponential rise in the past two decades. The world has started probing alternatives for efficient and resourceful sustenance, along with the utilization of advanced technology such as satellites. Space propulsion forms the core of space exploration, and of all the issues encountered, space debris has increasingly threatened space exploration and propulsion. These efforts have resulted in the presence of hazardous space debris fragments orbiting the earth at speeds of several kilometres per second. Debris is well known as a potential source of damage to future missions, with immense losses of resources and life at stake, and a huge amount of money is invested in active research on it. Appreciable work has been done in the past on active space debris-removal technologies such as harpoons, nets, and drag sails, with the primary emphasis on confined removal. Recently, the RemoveDEBRIS spacecraft was used to demonstrate debris-capture technologies: Airbus designed and planned the debris-catching net experiment aboard the spacecraft, which represents the largest payload deployed from the space station. However, the magnitude of the issue suggests that active space debris-removal technologies, such as harpoons and nets, would still not be enough, necessitating a better and more operative space debris removal system. Techniques based on diverting the path of debris or the spacecraft to avert damage have seen minimal usage owing to limited prediction capability. The present work focuses on an active hybrid space debris removal system and is motivated by the need for safer and more efficient space missions. The specific objectives of the work are 1) to thoroughly analyse the existing and conventional debris removal techniques, their working, effectiveness, and limitations under varying conditions, and 2) to understand the role of key controlling parameters in the coupled operation of debris capture and removal. The system utilizes the latest autonomous technology available, with an adaptable structural design for operations under varying conditions; the design retains the advantages of most of the existing technologies while removing the disadvantages. The system is likely to enhance the probability of effective space debris removal. At present, a systematic theoretical study is being carried out to thoroughly observe the effects of pseudo-random debris occurrences and to arrive at an optimal design with better features and control.

Keywords: space exploration, debris removal, spacecraft, space accidents

Procedia PDF Downloads 170
397 Interpretation of the Russia-Ukraine 2022 War via N-Gram Analysis

Authors: Elcin Timur Cakmak, Ayse Oguzlar

Abstract:

This study presents an analysis, using bigram and trigram methods, of tweets sent by Twitter users about the Russia-Ukraine war. On February 24, 2022, Russian President Vladimir Putin declared a military operation against Ukraine, and all eyes turned to this war. Many people living in Russia and Ukraine reacted to the war, protested, and expressed their deep concern, as they felt the safety of their families and their futures were at stake. Most people, especially those living in Russia and Ukraine, express their views on the war in different ways, and the most popular way to do so is through social media. Many people prefer to convey their feelings using Twitter, one of the most frequently used social media tools; since the beginning of the war, thousands of tweets about it have been posted from many countries of the world. These tweets are extracted through the Twitter API and analysed with the Python programming language. The aim of the study is to find the word sequences in these tweets by the n-gram method, which is widely used in computational linguistics and natural language processing. The tweet language used in the study is English, and the data set consists of data obtained from Twitter between February 24, 2022, and April 24, 2022. Tweets using the #ukraine, #russia, #war, #putin, and #zelensky hashtags together were captured as raw data and included in the analysis stage after cleaning in the preprocessing stage. In the data analysis, sentiment is used to characterize the messages people send about the war on Twitter; negative messages make up the majority of all tweets, at 63.6%. Furthermore, the most frequently used bigram and trigram word groups are found: the most frequent bigrams are “he, is”, “I, do”, and “I, am”, and the most frequent trigrams are “I, do, not”, “I, am, not”, and “I, can, not”. In the machine learning phase, the accuracy of classification is measured with the Classification and Regression Trees (CART) and Naïve Bayes (NB) algorithms, applied separately to bigrams and trigrams. For bigrams, the highest accuracy and F-measure values are achieved by the NB algorithm, and the highest precision and recall values by the CART algorithm. For trigrams, the highest accuracy, precision, and F-measure values are achieved by the CART algorithm, and the highest recall by NB.
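
A minimal sketch of such an n-gram pipeline in Python with scikit-learn: count bigrams, then fit the two classifier families. The four toy tweets and their labels are invented placeholders, and CART is approximated here by sklearn's DecisionTreeClassifier; none of this is the authors' actual code or data.

```python
from collections import Counter
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.tree import DecisionTreeClassifier

# Tiny stand-in corpus with sentiment labels (the real data set is two
# months of hashtag-filtered tweets, not shown here).
tweets = ["i do not want this war", "he is responsible for the war",
          "i am not safe anymore", "peace talks give me hope"]
labels = ["negative", "negative", "negative", "positive"]

# Extract and count bigrams; the token pattern keeps one-letter tokens
# such as "i". Switch ngram_range to (3, 3) for trigrams.
vec = CountVectorizer(ngram_range=(2, 2), token_pattern=r"(?u)\b\w+\b")
X = vec.fit_transform(tweets)
counts = Counter(dict(zip(vec.get_feature_names_out(),
                          X.sum(axis=0).A1)))
print(counts.most_common(3))

# Classify sentiment from the n-gram features with both algorithm families.
for clf in (MultinomialNB(), DecisionTreeClassifier(random_state=0)):
    clf.fit(X, labels)
    print(type(clf).__name__, clf.score(X, labels))
```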

Keywords: classification algorithms, machine learning, sentiment analysis, Twitter

Procedia PDF Downloads 75
396 Surgical Hip Dislocation of Femoroacetabular Impingement: Survivorship and Functional Outcomes at 10 Years

Authors: L. Hoade, O. O. Onafowokan, K. Anderson, G. E. Bartlett, E. D. Fern, M. R. Norton, R. G. Middleton

Abstract:

Aims: Femoroacetabular impingement (FAI) was first recognised as a potential driver of hip pain at the turn of the millennium. While there is an increasing trend towards surgical management of FAI by arthroscopic means, open surgical hip dislocation and debridement (SHD) remains the gold standard of care in terms of reported outcome measures (1). The long-term functional and survivorship outcomes of SHD as a treatment for FAI are yet to be sufficiently reported in the literature, and this study sets out to help address that imbalance. Methods: We undertook a retrospective review of our institutional database for all patients who underwent SHD for FAI between January 2003 and December 2008. A total of 223 patients (241 hips) were identified and underwent a ten-year review with a standardised radiograph and a patient-reported outcome measures questionnaire. The primary outcome measure of interest was survivorship, defined as progression to total hip arthroplasty (THA), and negative predictive factors were analysed. Secondary outcome measures of interest were survivorship to further (non-arthroplasty) surgery, functional outcomes as reflected by patient-reported outcome measure (PROM) scores, and whether a learning curve could be identified. Results: The final cohort consisted of 131 females and 110 males, with a mean age of 34 years. There was an overall native hip joint survival rate of 85.4% at ten years. Those who underwent a THA were significantly older at initial surgery and had radiographic evidence of preoperative osteoarthritis and pre- and post-operative acetabular undercoverage. In those who had not progressed to THA, the average Non-Arthritic Hip Score and Oxford Hip Score at ten-year follow-up were 72.3% and 36/48, respectively, and 84% still deemed their surgery worthwhile. A learning curve was found to exist that was predicated on case selection rather than surgical technique. Conclusion: This is only the second study to evaluate the long-term outcomes (beyond ten years) of SHD for FAI and the first outside the originating centre. Our results suggest that, with correct patient selection, this remains an operation with worthwhile outcomes at ten years. How the results of open surgery compare with those of arthroscopy remains to be answered. While these results precede the advent of collision software modelling tools, these data help set a benchmark for future comparisons of other techniques' effectiveness at the ten-year mark.
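
The abstract does not name its survivorship estimator; a Kaplan-Meier curve is the standard choice for this kind of time-to-THA analysis, and the sketch below shows one with the lifelines library. The follow-up times and event flags are synthetic stand-ins, not the study's cohort data.

```python
import numpy as np
from lifelines import KaplanMeierFitter

rng = np.random.default_rng(1)

# Synthetic stand-in for the cohort: follow-up time in years per hip and
# whether the endpoint (conversion to THA) was reached; the real figures
# come from the institutional database and are not reproduced here.
n_hips = 241
follow_up = np.clip(rng.exponential(30.0, n_hips), 0.5, 10.0)
converted = (rng.random(n_hips) < 0.15) & (follow_up < 10.0)

# Kaplan-Meier estimate of native-hip survivorship, with progression to
# THA as the event and all other hips censored at last follow-up.
kmf = KaplanMeierFitter()
kmf.fit(follow_up, event_observed=converted, label="native hip")
print("10-year survivorship:",
      float(kmf.survival_function_at_times(10.0).iloc[0]))
```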

Keywords: femoroacetabular impingement, hip pain, surgical hip dislocation, hip debridement

Procedia PDF Downloads 84
395 A Study of the Relationship among the Hotel Staff's Work Stress, Perceived Organizational Support, and Work Efficacy: A Case Study of Macao

Authors: Zhang Tao, Si Tang, Zhang Yufeng, Jin Jiahua

Abstract:

Work pressure is an emerging research topic in organizational behavior, and many factors associated with it have also attracted the interest of scholars. Macao has an open micro-capitalist economy with a high level of internationalization and a mature operating system, and the tourism and hotel service industry is undoubtedly the pillar of the Macao economy, especially since the introduction of the mainland individual tourist visa. As more and more cities embrace cultural diversity, the number of inbound tourists shows a rapid upward trend, giving the hotel industry a strong customer base and room for development. Hotel staff play an important role in service delivery; however, affected by various adverse factors, they face a variety of pressures. This study reviews the concepts and theories of work pressure and its influencing factors and sets out the purpose of the research. The focus is the relationship among perceived organizational support, work efficiency, and work pressure, studied with qualitative and quantitative research methods. Ten hotels in Macao were selected, and 500 questionnaires were distributed to their employees. SPSS was used for descriptive statistics, exploratory and confirmatory factor analyses were conducted, and interviews with practitioners were examined through content analysis. The innovation of this research lies in its empirical study of the relationships among work pressure, organizational support, and work efficiency for Macau hotel practitioners, and in constructing and validating a structural model of those relationships. This model should help future researchers study hotel practitioners' pressure with a wider range of methods. At the same time, we can draw the following conclusions: 1. there is a significant negative correlation between salary level and job stress; 2. there is a significant negative correlation between job stress and performance; 3. different levels of organizational support moderate the relationship between job stress and performance; 4. a corresponding adjustment strategy is put forward, which provides a reference for human resource management in the hotel industry. Training practitioners more scientifically and rationally would help improve service standards.
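
Conclusion 3 describes a moderation effect, which is commonly tested with an interaction term in a regression model. The sketch below, using statsmodels rather than the study's SPSS workflow, shows such a test on synthetic survey scores; the variable names, coefficients, and data are invented for illustration only.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)

# Synthetic survey scores (Likert-scale means) standing in for the 500
# questionnaires; variable names are illustrative, not the authors'.
n = 500
df = pd.DataFrame({
    "stress":  rng.normal(3.0, 0.7, n),
    "support": rng.normal(3.5, 0.6, n),
})
df["performance"] = (4.0 - 0.5 * df["stress"] + 0.3 * df["support"]
                     + 0.2 * df["stress"] * df["support"]
                     + rng.normal(0.0, 0.5, n))

# Moderation test: a significant stress:support interaction indicates that
# perceived organizational support moderates the stress-performance link.
model = smf.ols("performance ~ stress * support", data=df).fit()
print(model.summary().tables[1])
```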

Keywords: Macau, perceived organizational support, work stress, work efficiency

Procedia PDF Downloads 249
394 Optimizing Cell Culture Performance in an Ambr15 Microbioreactor Using Dynamic Flux Balance and Computational Fluid Dynamic Modelling

Authors: William Kelly, Sorelle Veigne, Xianhua Li, Zuyi Huang, Shyamsundar Subramanian, Eugene Schaefer

Abstract:

The ambr15™ bioreactor is a single-use microbioreactor for cell line development and process optimization. The ambr system offers fully automatic liquid handling, with the possibility of fed-batch operation and automatic control of pH and oxygen delivery. With operating conditions for large-scale biopharmaceutical production properly scaled down, microbioreactors such as the ambr15™ can potentially be used to predict the effect of process changes such as modified media or different cell lines. In this study, gassing rates and dilution rates were varied for a semi-continuous cell culture system in the ambr15™ bioreactor, and the corresponding changes to metabolite production and consumption, cell growth rate, and therapeutic protein production were measured. Conditions were identified in the ambr15™ bioreactor that produced metabolic shifts and specific metabolic and protein production rates also seen in the corresponding larger-scale (5 liter) perfusion process. A dynamic flux balance (DFB) model was employed to understand and predict the metabolic changes observed. The DFB model predicted the trends observed experimentally, including lower specific glucose consumption when CO₂ was maintained at higher levels (i.e., 100 mm Hg) in the broth. A computational fluid dynamics (CFD) model of the ambr15™ was also developed to understand the transfer of O₂ and CO₂ to the liquid. This CFD model predicted gas-liquid flow in the bioreactor using the ANSYS software; the two-phase flow equations were solved via an Eulerian method, with population balance equations tracking the size of the gas bubbles resulting from breakage and coalescence. Reasonable results were obtained in that the carbon dioxide mass transfer coefficient (kLa) and the air holdup increased with higher gas flow rates. Volume-averaged kLa values at 500 RPM increased as the gas flow rate was doubled and matched experimentally determined values. These results form a solid basis for optimizing the ambr15™, using both CFD and FBA modelling approaches together, for use in microscale simulations of larger-scale cell culture processes.
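
To make the dynamic flux balance idea concrete, the toy loop below couples a Monod-bounded "FBA" uptake step to biomass growth and a kLa-driven dissolved CO₂ balance. Every rate constant and the single-reaction network are invented for illustration; the paper's actual DFB model is far richer.

```python
import numpy as np

# Toy dynamic flux balance loop: at each time step a bounded uptake flux
# (here a simple Monod form standing in for the FBA solution) drives
# biomass growth, while dissolved CO2 follows a kLa transfer term plus a
# production term. All parameter values are illustrative assumptions.
dt, t_end = 0.1, 24.0                # time step and horizon, h
X, S, C = 0.1, 20.0, 0.5             # biomass g/L, glucose g/L, dCO2 mmol/L
Y_xs, v_max, Ks = 0.5, 0.8, 0.2      # yield, max uptake, Monod constant
kla, C_star, q_co2 = 5.0, 1.5, 2.0   # 1/h, saturation mmol/L, mmol/gDW/h

for _ in np.arange(0.0, t_end, dt):
    v_glc = v_max * S / (Ks + S)     # "FBA" glucose uptake bound
    mu = Y_xs * v_glc                # growth rate from the uptake flux
    X += mu * X * dt
    S = max(S - v_glc * X * dt, 0.0)
    C += (kla * (C_star - C) + q_co2 * X) * dt  # gas transfer + production

print(f"final biomass {X:.2f} g/L, glucose {S:.2f} g/L, dCO2 {C:.2f} mmol/L")
```

In the paper's coupled approach, the kLa used in such a balance would come from the CFD model rather than being fixed by hand as it is here.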

Keywords: cell culture, computational fluid dynamics, dynamic flux balance analysis, microbioreactor

Procedia PDF Downloads 283