432 The Numerical Model of the Onset of Acoustic Oscillation in Pulse Tube Engine
Authors: Alexander I. Dovgyallo, Evgeniy A. Zinoviev, Svetlana O. Nekrasova
Abstract:
Most existing works on pulse tube converters describe the workflow using mathematical models of stationary modes. However, the unsteady behavior of thermoacoustic systems during start, stop, and acoustic load changes is of particular interest. The aim of the present study was to develop a mathematical model of the thermal excitation of acoustic oscillations in a pulse tube engine (PTE), realized as a small-scale pulse tube engine operating on atmospheric air. Unlike some previous works, this standing-wave configuration is a fully closed system. The improvements over previous mathematical models are the following: the model allows specifying any value of regenerator porosity, takes into account the piston weight and the friction in the cylinder-piston unit, and determines the operating frequency. The numerical method is based on relation equations between the pressure and volume velocity variables at the ends of each element of the PTE, expressed through the appropriate transformation matrix. The solution demonstrates that the PTE operating frequency is a complex value which depends on the piston mass and the dynamic friction due to its movement in the cylinder. On the basis of the determined frequency, the equations for thermoacoustically induced heat transport and generation of acoustic power were solved for a channel with a temperature gradient at its ends. The results of the numerical simulation demonstrate the features of the oscillation initialization process and show that the generated acoustic power exceeds the steady-mode power by a factor of 3-4. However, this does not imply that it can be utilized continuously, since it exists only in the transient mode, which lasts only 30-40 s. The experiments were carried out on a small-scale PTE.
The results show that the acoustic power is in the range of 0.7-1.05 W for the defined frequency range f = 13-18 Hz and pressure amplitudes of 11-12 kPa. These experimental data correlate satisfactorily with the numerical modeling results. The mathematical model can be straightforwardly applied to thermoacoustic devices with variable thermal reservoir temperatures and variable transduction loads, which are expected to occur in practical implementations of portable thermoacoustic engines.
Keywords: nonlinear processes, pulse tube engine, thermal excitation, standing wave
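The transfer-matrix formulation described above, in which each element relates the pressure and volume velocity at its two ends, can be sketched as follows. All dimensions, the three-element chain, and the lossless-duct matrix form are illustrative assumptions, not the authors' actual model (which additionally accounts for regenerator porosity, piston mass, and friction):

```python
import numpy as np
from functools import reduce

def duct_matrix(freq, length, area, rho=1.2, c=343.0):
    """Lossless-duct transfer matrix linking (pressure, volume velocity)
    at the two ends of an element: [p1, U1]^T = T @ [p2, U2]^T."""
    k = 2 * np.pi * freq / c          # acoustic wavenumber
    Z = rho * c / area                # characteristic acoustic impedance
    return np.array([[np.cos(k * length), 1j * Z * np.sin(k * length)],
                     [1j * np.sin(k * length) / Z, np.cos(k * length)]])

def chain(matrices):
    """Transfer matrix of elements connected in series."""
    return reduce(np.matmul, matrices)

# Hypothetical three-element PTE-like chain (hot duct, regenerator slug, cold duct)
T = chain([duct_matrix(15.0, L, A)
           for L, A in [(0.10, 1e-4), (0.05, 2e-4), (0.10, 1e-4)]])

# A reciprocal (lossless) network has det(T) = 1
print(abs(np.linalg.det(T) - 1.0) < 1e-9)
```

In the full model, each element's matrix would include viscous and thermal losses, and the operating frequency would be found as the complex root for which the closed-loop system admits a nontrivial solution.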
Procedia PDF Downloads 376
431 An Extended Domain-Specific Modeling Language for Marine Observatory Relying on Enterprise Architecture
Authors: Charbel Aoun, Loic Lagadec
Abstract:
A Sensor Network (SN) operates in two phases: (1) observation/measuring, i.e., the accumulation of the gathered data at each sensor node; (2) transferring the collected data to some processing center (e.g., fusion servers) within the SN. An underwater sensor network can therefore be defined as a sensor network deployed underwater to monitor underwater activity. The deployed sensors, such as hydrophones, are responsible for registering underwater activity and transferring it to more advanced components. The process of data exchange between the aforementioned components defines the Marine Observatory (MO) concept, which provides information on ocean state, phenomena, and processes. The first step towards the implementation of this concept is defining the environmental constraints and the required tools and components (marine cables, smart sensors, data fusion server, etc.). The logical and physical components used in these observatories perform critical functions such as the localization of underwater moving objects. These functions can be orchestrated with other services (e.g., military or civilian reaction). In this paper, we present an extension to our MO meta-model that is used to generate a design tool (ArchiMO). We propose new constraints to be taken into consideration at design time. We illustrate our proposal with an example from the MO domain. Additionally, we generate the corresponding simulation code using our self-developed domain-specific model compiler. On the one hand, this illustrates our approach of relying on an Enterprise Architecture (EA) framework that respects multiple views, the perspectives of stakeholders, and domain specificity. On the other hand, it helps reduce both the complexity and the time spent on the design activity, while preventing design modeling errors when porting this activity to the MO domain.
In conclusion, this work aims to demonstrate that the design activity of complex systems can be improved through the use of MDE technologies and a domain-specific modeling language with the associated tooling. The major improvement is to provide an early validation step via a models-and-simulation approach to consolidate the system design.
Keywords: smart sensors, data fusion, distributed fusion architecture, sensor networks, domain specific modeling language, enterprise architecture, underwater moving object, localization, marine observatory, NS-3, IMS
Procedia PDF Downloads 177
430 Biogas Potential of Deinking Sludge from Wastepaper Recycling Industry: Influence of Dewatering Degree and High Calcium Carbonate Content
Authors: Moses Kolade Ogun, Ina Korner
Abstract:
To improve sustainable resource management in the wastepaper recycling industry, studies into the valorization of the wastes generated by the industry are necessary. The industry produces different residues, among which is deinking sludge (DS). DS is generated by the deinking process and constitutes a major fraction of the residues generated by the European pulp and paper industry. The traditional treatment of DS by incineration is capital-intensive due to the energy required for dewatering and the need for a complementary fuel source because of the low calorific value of DS. This could be replaced by a biotechnological approach. This study therefore investigated the biogas potential of different DS streams (different dewatering degrees) and the influence of the high calcium carbonate content of DS on its biogas potential. A dewatered DS (solid fraction) sample from a filter press and the filtrate (liquid fraction) were collected from a partner wastepaper recycling company in Germany. The solid fraction and the liquid fraction were mixed in proportion to obtain DS with different water contents (55-91% fresh mass). Spiked DS samples using deionized water, cellulose, and calcium carbonate were prepared to simulate DS with varying calcium carbonate content (0-40% dry matter). Seeding sludge was collected from an existing biogas plant treating sewage sludge in Germany. Biogas potential was studied using a 1-liter batch test system under mesophilic conditions, run for 21 days. Specific biogas potentials in the range of 133-230 NL/kg organic dry matter were observed for the DS samples investigated. It was found that an increase in the liquid fraction leads to an increase in the specific biogas potential and a reduction in the absolute biogas potential (NL-biogas/fresh mass). By comparing the absolute and specific biogas potential curves, an optimal dewatering degree corresponding to a water content of about 70% fresh mass was identified.
This degree of dewatering is a compromise when factors such as biogas yield, reactor size, the energy required for dewatering, and operating cost are considered. No inhibitory influence on the biogas potential of DS was observed due to its reported high calcium carbonate content. This study confirms that DS is a potential bioresource for biogas production. Further optimization, such as nitrogen supplementation to compensate for the high C/N ratio of DS, can increase the biogas yield.
Keywords: biogas, calcium carbonate, deinking sludge, dewatering, water content
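The trade-off between specific and absolute biogas potential that produces the ~70% optimum can be illustrated numerically. The linear interpolation of the reported specific-potential range and the assumed organic fraction of dry matter are illustrative choices, not the study's measured relationship:

```python
import numpy as np

# Illustrative only: interpolate the reported specific biogas potential
# (133-230 NL/kg organic dry matter) across the studied water contents
# (55-91% fresh mass); the organic share of dry matter is an assumption.
water = np.linspace(0.55, 0.91, 10)                    # water content, fraction of FM
specific = np.interp(water, [0.55, 0.91], [133, 230])  # NL/kg oDM (assumed linear)
odm_fraction = 0.45                                    # assumed organic share of DM
odm_per_fm = (1 - water) * odm_fraction                # kg oDM per kg fresh mass
absolute = specific * odm_per_fm                       # NL biogas per kg fresh mass

for w, s, a in zip(water, specific, absolute):
    print(f"water {w:.0%}: specific {s:5.1f} NL/kg-oDM, absolute {a:5.1f} NL/kg-FM")
```

Under these assumptions the specific potential rises with water content while the absolute potential falls, so an intermediate dewatering degree balances the two, as the study reports.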
Procedia PDF Downloads 182
429 Impact of Intelligent Transportation System on Planning, Operation and Safety of Urban Corridor
Authors: Sourabh Jain, S. S. Jain
Abstract:
An intelligent transportation system (ITS) is the application of technologies to develop a user-friendly transportation system that extends the safety and efficiency of urban transportation in developing countries. These systems involve vehicles, drivers, passengers, road operators, and managers of transport services, all interacting with each other and the surroundings to boost the security and capacity of road systems. The goal of urban corridor management using ITS in road transport is to achieve improvements in mobility, safety, and the productivity of the transportation system within the available facilities through the integrated application of advanced monitoring, communications, computer, display, and control process technologies, both in the vehicle and on the road. The intelligent transportation system is a product of the revolution in information and communications technologies that is the hallmark of the digital age. Basic ITS technology is oriented along three main directions: communications, information, and integration. Information acquisition (collection), processing, integration, and sorting are the basic activities of ITS. In this paper, attempts have been made to interpret and evaluate the performance of the 27.4 km long study corridor, which has eight intersections and four flyovers and consists of six-lane as well as eight-lane divided road sections. Two categories of data were collected: traffic data (traffic volume, spot speed, delay) and road characteristics data (number of lanes, lane width, bus stops, mid-block sections, intersections, flyovers). The instruments used for collecting the data were a video camera, stopwatch, radar gun, and mobile GPS (GPS Tracker Lite). From the analysis, the performance interpretations included the identification of peak and off-peak hours, congestion and level of service (LOS) at mid-block sections, and delay, followed by plotting the speed contours.
The paper proposes urban corridor management strategies based on sensors integrated into both vehicles and roads; these strategies have to be efficiently executable, cost-effective, and familiar to road users. They will be useful in reducing congestion, fuel consumption, and pollution so as to provide comfort, safety, and efficiency to users.
Keywords: ITS strategies, congestion, planning, mobility, safety
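A minimal sketch of the LOS classification step mentioned above, assuming conventional HCM-style volume-to-capacity breakpoints; the thresholds and the traffic volumes are illustrative, not the study's own calibration:

```python
def level_of_service(vc_ratio):
    """Map a volume-to-capacity (v/c) ratio to a level-of-service grade.
    Thresholds are illustrative HCM-style breakpoints (assumed)."""
    thresholds = [(0.30, 'A'), (0.50, 'B'), (0.70, 'C'),
                  (0.85, 'D'), (1.00, 'E')]
    for limit, grade in thresholds:
        if vc_ratio <= limit:
            return grade
    return 'F'   # oversaturated flow

# Peak-hour volume 1650 veh/h against a 2000 veh/h capacity (hypothetical numbers)
print(level_of_service(1650 / 2000))   # -> D
```

In a corridor study like this one, the v/c ratio would come from the counted mid-block traffic volumes and the estimated section capacity for each analysis period.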
Procedia PDF Downloads 179
428 Evaluating the Feasibility of Chemical Dermal Exposure Assessment Model
Authors: P. S. Hsi, Y. F. Wang, Y. F. Ho, P. C. Hung
Abstract:
The aim of the present study was to explore dermal exposure assessment models for chemicals that have been developed abroad and to evaluate the feasibility of a chemical dermal exposure assessment model for the manufacturing industry in Taiwan. We analyzed six semi-quantitative risk management tools: UK - Control of Substances Hazardous to Health (COSHH), Europe - Risk Assessment of Occupational Dermal Exposure (RISKOFDERM), Netherlands - Dose-Related Effect Assessment Model (DREAM), Netherlands - Stoffenmanager (STOFFEN), Nicaragua - Dermal Exposure Ranking Method (DERM), and USA/Canada - Public Health Engineering Department (PHED). Five types of manufacturing industry were selected for evaluation. Monte Carlo simulation was used to analyze the sensitivity of each factor, and the correlation between the assessment results of each semi-quantitative model and the exposure factors used in the model was analyzed to identify the important evaluation indicators of the dermal exposure assessment models. To assess the effectiveness of the semi-quantitative assessment models, this study also produced quantitative dermal exposure estimates using a prediction model and verified the correlation via Pearson's test. Results show that COSHH was unable to determine the strength of its decision factors because the results evaluated for all industries belonged to the same risk level. In the DERM model, the transmission process, the exposed area, and the clothing protection factor are all positively correlated. In the STOFFEN model, the fugitive emissions, the operation, the near-field and far-field concentrations, and the operating time and frequency show a positive correlation. There is a positive correlation between skin exposure, relative working time, and working environment in the DREAM model. In the RISKOFDERM model, the actual exposure situation and exposure time have a positive correlation.
We also found high correlations for the DERM and RISKOFDERM models, with correlation coefficients of 0.92 and 0.93 (p<0.05), respectively. The STOFFEN and DREAM models correlate poorly, with coefficients of 0.24 and 0.29 (p>0.05), respectively. According to these results, both the DERM and RISKOFDERM models are suitable for the selected manufacturing industries. However, considering the small sample size evaluated in this study, more categories of industries should be evaluated in the future to reduce uncertainty and enhance applicability.
Keywords: dermal exposure, risk management, quantitative estimation, feasibility evaluation
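The Pearson-correlation check between semi-quantitative model scores and quantitative exposure estimates can be sketched as below; the five paired values are hypothetical stand-ins, not the study's data:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two score series."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc)))

# Hypothetical scores: five industries rated by a semi-quantitative model (semi)
# against quantitative dermal-exposure estimates (quant), arbitrary units
semi = [2.1, 3.4, 1.2, 4.0, 2.8]
quant = [0.8, 1.5, 0.4, 1.9, 1.1]
print(round(pearson_r(semi, quant), 2))   # -> 0.99
```

A significance test on the coefficient (as in the study's p-values) would additionally require the sample size, e.g. via a t-statistic with n-2 degrees of freedom.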
Procedia PDF Downloads 169
427 Advancing Circular Economy Principles: Integrating AI Technology in Street Sanitation for Sustainable Urban Development
Authors: Xukai Fu
Abstract:
The concept of the circular economy is interdisciplinary, intersecting the environmental engineering, information technology, business, and social science domains. Over the course of its 15-year tenure in the sanitation industry, Jinkai has concentrated its efforts in the past five years on integrating artificial intelligence (AI) technology with street sanitation apparatus and systems. This endeavor has led to the development of various innovations, including the Intelligent Identification Sweeper Truck (Intelligent Waste Recognition and Energy-saving Control System), the Intelligent Identification Water Truck (Intelligent Flushing Control System), the intelligent food waste treatment machine, and the Intelligent City Road Sanitation Surveillance Platform. This study will commence with an examination of prevalent global challenges, elucidating how Jinkai effectively addresses each within the framework of circular economy principles. Utilizing a review and analysis of pertinent environmental management data, we will elucidate Jinkai's strategic approach. Following this, we will investigate how Jinkai utilizes the advantages of circular economy principles to guide the design of street sanitation machinery, with a focus on digitalization integration. Moreover, we will scrutinize Jinkai's sustainable practices throughout the invention and operation phases of street sanitation machinery, in line with the triple bottom line theory. Finally, we will delve into the significance and enduring impact of corporate social responsibility (CSR) and environmental, social, and governance (ESG) initiatives. Special emphasis will be placed on Jinkai's contributions to community stakeholders, with a particular focus on human rights. Despite the widespread adoption of circular economy principles across various industries, achieving a harmonious equilibrium between environmental justice and social justice remains a formidable task.
Jinkai acknowledges that the mere development of energy-saving technologies is insufficient for authentic circular economy implementation; rather, such technologies serve as instrumental tools. To earnestly promote and embody circular economy principles, companies must consistently prioritize the UN Sustainable Development Goals and adapt their technologies to address the evolving exigencies of our world.
Keywords: circular economy, core principles, benefits, the triple bottom line, CSR, ESG, social justice, human rights, Jinkai
Procedia PDF Downloads 47
426 Influence of Various Disaster Scenarios Assumption to the Advance Creation of Wide-Area Evacuation Plan Confronting Natural Disasters
Authors: Nemat Mohammadi, Yuki Nakayama
Abstract:
The Great East Japan earthquake and the invasion of the extremely large tsunami that followed it obligated many local governments to take these kinds of issues into account. The poor preparation of local governments at that time to deal with such disasters, and the consequent lack of assistance for local residents, caused thousands of civilian casualties as well as billions of dollars of economic damage. The local governments responsible for governing such coastal areas have to consider countermeasures against these natural disasters, prepare a comprehensive evacuation plan, and devise feasible emergency plans to reduce the number of victims as much as possible. Under such an evacuation plan, the local government should consider the traffic congestion during a wide-area evacuation operation and estimate the minimum essential time to evacuate the whole city completely. This challenge becomes more complicated when those affected by the disaster are not limited to normal informed citizens but also include pregnant women, physically handicapped persons, elderly citizens, and foreigners or tourists who are unfamiliar with the local conditions and language. The first issue in dealing with this challenge is how to inform these people so that they take proper action as soon as they notice the tsunami is coming. After overcoming this problem, the next challenge is even more considerable: to evacuate all residents in a short period of time from the threatened area to safer shelters. In fact, most citizens will use their own vehicles to evacuate to the designated shelters, and some will use the shuttle buses provided by local governments.
The problem arises when all residents want to escape from the threatened area simultaneously, creating a traffic jam on the evacuation routes that prolongs the evacuation time. Hence, this research mostly aims to calculate the minimum essential time to evacuate each region inside the threatened area and to find the evacuation start point for each region separately. This result will help the local government to visualize the situations and conditions during disasters, assist them in reducing the possible traffic jams on evacuation routes, and consequently suggest a comprehensive wide-area evacuation plan for natural disasters.
Keywords: BPR formula, disaster scenarios, evacuation completion time, wide-area evacuation
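The BPR formula named in the keywords relates link travel time to the volume-to-capacity ratio and is the standard way to model how simultaneous departures congest evacuation routes. A minimal sketch, with hypothetical link parameters and the classic BPR coefficients (the study may calibrate its own):

```python
def bpr_travel_time(t0, volume, capacity, alpha=0.15, beta=4.0):
    """BPR link travel time: t = t0 * (1 + alpha * (v/c)**beta).
    alpha/beta defaults are the classic BPR values (assumed here)."""
    return t0 * (1 + alpha * (volume / capacity) ** beta)

# Hypothetical evacuation link: 10 min free-flow time, 1800 veh/h capacity
for v in (900, 1800, 2700):    # light, at-capacity, oversaturated demand
    print(v, round(bpr_travel_time(10.0, v, 1800), 1))
```

Summing such congestion-dependent link times along each region's evacuation route, for a given departure schedule, yields the evacuation completion time the research seeks to minimize.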
Procedia PDF Downloads 211
425 FEM and Experimental Modal Analysis of Computer Mount
Authors: Vishwajit Ghatge, David Looper
Abstract:
Over the last few decades, oilfield service rolling equipment has significantly increased in weight, primarily because of emissions regulations, which require larger/heavier engines, larger cooling systems, and, in some cases, emissions after-treatment systems. Larger engines cause more vibration and shock loads, leading to failure of electronics and control systems. If the vibrating frequency of the engine matches the system frequency, high resonance is observed in structural parts and mounts. One such existing automated control equipment system, comprising wire rope mounts used for mounting computers, was designed approximately 12 years ago. It includes an industrial-grade computer to control the system operation. The original computer had a smaller, lighter enclosure. After a few years, a newer computer version was introduced, which was 10 lbm heavier. Some failures of internal computer parts have been documented for cases in which the old mounts were used. Because of the added weight, there is a possibility of the two brackets impacting each other under off-road conditions, which causes a high shock input to the computer parts. This added failure mode requires validating the existing mount design to suit the new heavier computer. This paper discusses the modal finite element method (FEM) analysis and experimental modal analysis conducted to study the effects of vibration on the wire rope mounts and the computer. The existing mount was modeled in ANSYS software, and the resulting mode shapes and frequencies were obtained. The experimental modal analysis was conducted, and the actual frequency responses were observed and recorded. The results clearly revealed that at the resonance frequency, the brackets were colliding and potentially causing damage to computer parts. To solve this issue, spring mounts of different stiffnesses were modeled in ANSYS software, and the resonant frequencies were determined.
Increasing the stiffness of the system moved the resonant frequency away from the frequency window in which the engine showed heavy vibration or resonance. After multiple iterations in ANSYS software, the stiffness of the spring mount was finalized and then experimentally validated.
Keywords: experimental modal analysis, FEM modal analysis, frequency, modal analysis, resonance, vibration
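The stiffness-tuning logic described above follows from the single-degree-of-freedom relation f = (1/2π)√(k/m): a stiffer mount raises the natural frequency. The mass and stiffness values below are illustrative, not the paper's finalized mount parameters:

```python
import math

def natural_frequency_hz(stiffness_n_per_m, mass_kg):
    """Undamped single-DOF natural frequency: f = sqrt(k/m) / (2*pi)."""
    return math.sqrt(stiffness_n_per_m / mass_kg) / (2 * math.pi)

# Hypothetical computer-plus-bracket mass and two candidate mount stiffnesses
mass = 12.0           # kg (illustrative; the paper only states the new computer
                      # is 10 lbm heavier than the original)
for k in (5e4, 2e5):  # N/m, softer vs stiffer spring mount
    print(f"k = {k:.0e} N/m -> f = {natural_frequency_hz(k, mass):.1f} Hz")
```

In the actual design study, the FEM model captures multiple modes and mode shapes, but the same trend holds: quadrupling stiffness roughly doubles the fundamental frequency, shifting it out of the engine's excitation window.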
Procedia PDF Downloads 321
424 Investigating Effects of Vehicle Speed and Road PSDs on Response of a 35-Ton Heavy Commercial Vehicle (HCV) Using Mathematical Modelling
Authors: Amal G. Kurian
Abstract:
The use of mathematical modeling has seen a considerable boost in recent times with the development of many advanced algorithms and mathematical modeling capabilities. The advantage this method has over other methods is that such models are much closer to standard physics theories and thus represent a better theoretical model. They take less solving time and allow various parameters to be changed for optimization, which is a big advantage, especially in the automotive industry. This thesis work focuses on a thorough investigation of the effects of vehicle speed and road roughness on the ride and structural dynamic responses of a heavy commercial vehicle. Since commercial vehicles are kept in continuous operation for long periods of time, it is important to study the effects of various physical conditions on the vehicle and its user. For this purpose, various experimental as well as simulation methodologies are adopted, ranging from experimental transfer path analysis to various road scenario simulations. To effectively investigate and eliminate several causes of unwanted responses, an efficient and robust technique is needed. Carrying this motivation forward, the present work focuses on the development of a mathematical model of a 4-axle heavy commercial vehicle (HCV) capable of calculating the responses of the vehicle to different road PSD inputs and vehicle speeds. Outputs from the model include response transfer functions, response PSDs, and the wheel forces experienced. A MATLAB code will be developed to implement the objectives in a robust and flexible manner, which can be exploited further in studies of responses for various suspension parameters, loading conditions, and vehicle dimensions. The thesis work resulted in quantifying the effect of various physical conditions on the ride comfort of the vehicle. An increase in discomfort is seen as velocity increases; road profiles also have a considerable effect on driver comfort.
Details of the dominant modes at each frequency are analysed and discussed in the work. The reduction in ride height, i.e., the deflection of the tires and suspension with loading, along with the load on each axle, is analysed; it is seen that the front axle supports a greater portion of the vehicle weight, while more of the payload weight comes on the fourth and third axles. The deflection of the vehicle is seen to be well within acceptable limits.
Keywords: mathematical modeling, HCV, suspension, ride analysis
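The road PSD input that drives such a model is commonly described by the ISO 8608 single-slope form, and vehicle speed maps spatial road frequencies to the temporal excitation frequencies the vehicle feels (f = n·V). The roughness coefficient and waviness below are assumed, illustrative values, not the thesis's inputs:

```python
import numpy as np

def road_psd(n, Gd_n0=16e-6, n0=0.1, w=2.0):
    """One-sided road displacement PSD, ISO 8608 form:
    Gd(n) = Gd(n0) * (n / n0)**(-w), spatial frequency n in cycles/m.
    Gd_n0 = 16e-6 m^3 is an assumed roughness coefficient."""
    return Gd_n0 * (np.asarray(n) / n0) ** (-w)

# Temporal excitation frequency scales with speed: f = n * V, so a higher
# vehicle speed shifts the same road wavelengths to higher frequencies.
n = np.array([0.01, 0.1, 1.0])        # cycles per metre
for V in (40 / 3.6, 80 / 3.6):        # vehicle speed in m/s
    f = n * V
    print(f"V = {V:4.1f} m/s: f = {f.round(2)} Hz, Gd = {road_psd(n)}")
```

This speed-dependent frequency mapping is why the same road profile produces more discomfort at higher velocity: the excitation moves toward the vehicle's resonant modes.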
Procedia PDF Downloads 258
423 Detection of Abnormal Process Behavior in Copper Solvent Extraction by Principal Component Analysis
Authors: Kirill Filianin, Satu-Pia Reinikainen, Tuomo Sainio
Abstract:
Frequent measurements of product stream quality create a data overload that becomes more and more difficult to handle. In the current study, plant history data with multiple variables were successfully treated by principal component analysis to detect abnormal process behavior, particularly in copper solvent extraction. The multivariate model is based on the concentration levels of the main process metals recorded by an industrial on-stream X-ray fluorescence analyzer. After mean-centering and normalization of the concentration data set, a two-dimensional multivariate model based on the principal component analysis algorithm was constructed. Normal operating conditions were defined through control limits assigned to squared score values on the x-axis and to residual values on the y-axis. Eighty percent of the data set was taken as the training set, and the multivariate model was tested with the remaining 20 percent of the data. Model testing showed the successful application of control limits to detect abnormal behavior of the copper solvent extraction process as early warnings. Compared to conventional techniques that analyze one variable at a time, the proposed model allows detecting a process failure on-line using information from all process variables simultaneously. Complex industrial equipment combined with advanced mathematical tools may be used for on-line monitoring of both process stream composition and final product quality. Defining the normal operating conditions of the process supports reliable decision making in a process control room. Thus, industrial X-ray fluorescence analyzers equipped with an integrated data processing toolbox allow more flexibility in copper plant operation. It is recommended to apply the additional multivariate process control and monitoring procedures separately for the major components and for the impurities.
Principal component analysis may be utilized not only in the control of major elements' content in process streams, but also for continuous monitoring of the plant feed. The proposed approach has potential in on-line instrumentation, providing a fast, robust, and cheap application with automation abilities.
Keywords: abnormal process behavior, failure detection, principal component analysis, solvent extraction
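A compact sketch of the monitoring scheme described above: PCA on a mean-centred, normalised training block, with a squared-score (Hotelling T²) statistic and a residual (Q) statistic checked against control limits. The synthetic data and the empirical 95th-percentile limits are illustrative assumptions; the plant model used metal concentrations from the on-stream XRF analyzer:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for mean-centred, normalised concentration data:
# two latent factors drive five correlated "assay" variables.
latent = rng.normal(size=(200, 2))
X = latent @ rng.normal(size=(2, 5)) + 0.1 * rng.normal(size=(200, 5))
X = (X - X.mean(0)) / X.std(0)

# 80/20 split, as in the study
Xtr, Xte = X[:160], X[160:]

# PCA via SVD of the training block; keep 2 components
U, S, Vt = np.linalg.svd(Xtr, full_matrices=False)
P = Vt[:2].T                          # loadings (variables x components)
lam = (S[:2] ** 2) / (len(Xtr) - 1)   # variance captured by each component

def t2(x):
    """Hotelling T^2: squared scores scaled by component variances."""
    return float(((x @ P) ** 2 / lam).sum())

def q(x):
    """Q statistic: squared residual outside the model plane."""
    r = x - (x @ P) @ P.T
    return float(r @ r)

# Empirical 95th-percentile control limits from the training set
t2_lim = np.percentile([t2(x) for x in Xtr], 95)
q_lim = np.percentile([q(x) for x in Xtr], 95)

alarms = sum(t2(x) > t2_lim or q(x) > q_lim for x in Xte)
print(f"test samples flagged: {alarms}/{len(Xte)}")
```

A sample exceeding the T² limit is unusual within the normal correlation structure, while a sample exceeding the Q limit breaks that structure; either condition raises an early warning.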
Procedia PDF Downloads 309
422 KPI and Tool for the Evaluation of Competency in Warehouse Management for Furniture Business
Authors: Kritchakhris Na-Wattanaprasert
Abstract:
The objective of this research is to design and develop a prototype of a key performance indicator (KPI) system that is suitable for warehouse management, based on a case study and user requirements. We designed a prototype KPI system for a warehouse case study of a furniture business. The methodology comprised the following steps: identifying the scope of the research and studying related papers, gathering the necessary data and user requirements, developing key performance indicators based on the balanced scorecard, designing the program and database for the key performance indicators, coding the program and setting up the database relationships, and finally testing and debugging each module. This study uses the Balanced Scorecard (BSC) for selecting and grouping the key performance indicators. Microsoft SQL Server 2010 is used to create the system database. As the visual programming language, Microsoft Visual C# 2010 is chosen as the graphical user interface development tool. The system consists of six main menus: login, main data, financial perspective, customer perspective, internal process perspective, and learning and growth perspective. Each menu consists of key performance indicator forms. Each form contains a data import section, a data input section, a data search/edit section, and a report section. The system generates five main reports: the KPI detail report, KPI summary report, KPI graph report, benchmarking summary report, and benchmarking graph report. The user selects the conditions of the report and the time period. As the system has been developed and tested, it was found to be one way of judging the extent to which warehouse objectives have been achieved. Moreover, it encourages warehouse functions to proceed more efficiently. To be useful for other industries, this system can be adjusted appropriately.
To increase the usefulness of the key performance indicator system, the recommendations for further development are as follows:
- The warehouse should periodically review the target values and set more suitable targets as conditions fluctuate in the future.
- The warehouse should periodically review the key performance indicators and set more suitable ones as conditions fluctuate, in order to increase competitiveness and take advantage of new opportunities.
Keywords: key performance indicator, warehouse management, warehouse operation, logistics management
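The per-KPI computation behind such report forms can be sketched as a target-attainment calculation. The KPI names, targets, and balanced-scorecard grouping below are hypothetical illustrations, not the case study's actual indicators:

```python
def kpi_attainment(actual, target, higher_is_better=True):
    """Percentage attainment of a KPI against its target value."""
    if higher_is_better:
        return 100.0 * actual / target
    return 100.0 * target / actual   # e.g. cost-type or error-type KPIs

# Hypothetical warehouse KPIs grouped by balanced-scorecard perspective
kpis = [
    ("financial", "inventory cost per order", 48.0, 45.0, False),
    ("customer",  "on-time delivery %",       93.0, 95.0, True),
    ("internal",  "picks per labour hour",   110.0, 100.0, True),
]
for persp, name, actual, target, hib in kpis:
    print(f"{persp:9s} {name:24s} {kpi_attainment(actual, target, hib):6.1f}%")
```

This is the kind of calculation a KPI form would run after the data-input step, with the result feeding the summary, graph, and benchmarking reports.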
Procedia PDF Downloads 431
421 The Maps of Meaning (MoM) Consciousness Theory
Authors: Scott Andersen
Abstract:
Perhaps simply and rather unadornedly, consciousness is having multiple goals for action and the continuous adjudication of those goals to implement action; this is referred to as the Maps of Meaning (MoM) Consciousness Theory. The MoM theory triangulates through three parallel corollaries: action (behavior), mechanism (morphology/pathophysiology), and goals (teleology). (1) An organism's consciousness contains fluid, nested goals. These goals are not intentionality but intersectionality, embodiment meeting the world, i.e., Darwinian inclusive fitness or randomization, then survival of the fittest. These goals form via gradual descent under inclusive fitness, the goals being the abstraction of a 'match' between the evolutionary environment and the organism. Human consciousness implements the brain efficiency hypothesis: genetics, epigenetics, and experience crystallize efficiencies, necessitating not what is best or objective but fitness, i.e., perceived efficiency based on one's adaptive environment. These efficiencies are objectively arbitrary, but they determine the operation and level of one's consciousness, termed extreme thrownness. Since inclusive fitness drives efficiencies in physiologic mechanism, morphology, and behavior (action) and originates one's goals, embodiment is necessarily entangled with human consciousness, as it is the intersection of mechanism or action (both necessitating embodiment) occurring in the world that determines fitness. Perception is the operant process of consciousness and is the consciousness's de facto goal adjudication process. Goal operationalization is fundamentally efficiency-based via one's unique neuronal mapping as a byproduct of genetics, epigenetics, and experience. Perception involves information intake and information discrimination, equally underpinned by the efficiencies of inclusive fitness via extreme thrownness. Perception isn't a 'frame rate,' but Bayesian priors of efficiency based on one's extreme thrownness.
Consciousness, including human consciousness, is modular (i.e., a scalar level of richness, which builds up like building blocks) and dimensionalized (i.e., cognitive abilities become possible as emergent phenomena at various modularities, like stratified factors in factor analysis). The meta-dimensions of human consciousness seemingly include intelligence quotient, personality (five-factor model), richness of perception intake, and richness of perception discrimination, among other potentialities. Future consciousness research should utilize factor analysis to parse the modularities and dimensions of human consciousness and animal models.
Keywords: consciousness, perception, prospection, embodiment
Procedia PDF Downloads 59
420 Effective Apixaban Clearance with Cytosorb Extracorporeal Hemoadsorption
Authors: Klazina T. Havinga, Hilde R. H. de Geus
Abstract:
Introduction: Pre-operative coagulation management of patients prescribed Apixaban, a new oral anticoagulant (a factor Xa inhibitor), is difficult, especially when chronic kidney disease (CKD) causes drug overdose. Apixaban is not dialyzable due to its high level of protein binding. An antidote, Andexanet α, is available but expensive and has an unfavorably short half-life. We report the successful extracorporeal removal of Apixaban prior to emergency surgery with the CytoSorb® Hemoadsorption device. Methods: An 89-year-old woman with CKD, on Apixaban for atrial fibrillation, presented at the ER with traumatic rib fractures, a flail chest, and an unstable spinal fracture (T12) for which emergency surgery was indicated. However, due to very high Apixaban levels, this surgery had to be postponed. Based on the Apixaban-specific anti-factor Xa activity (AFXaA) measurements at admission and 10 hours later, complete clearance was expected only after 48 hours. In order to enhance Apixaban removal, reduce the time to operation, and thereby reduce pulmonary complications, CRRT with a CytoSorb® cartridge was initiated. AFXaA was measured frequently as a substitute for Apixaban drug concentrations, pre- and post-adsorber, in order to calculate the adsorber-related clearance. Results: The admission AFXaA concentration, as a substitute for Apixaban drug levels, was 218 ng/ml, which decreased to 157 ng/ml after ten hours. Due to sustained anticoagulation effects, surgery was again postponed. However, once CRRT (Multifiltrate Pro, Fresenius Medical Care; blood flow 200 ml/min, dialysate flow 4000 ml/h, prescribed renal dose 51 ml/kg/h) with CytoSorb® connected in series into the circuit was initiated, the AFXaA levels decreased quickly to sub-therapeutic levels (within 5 hours). 
The adsorber-related (indirect) Apixaban clearance was calculated every half hour as Cl = Qe × (AFXaA_pre − AFXaA_post) / AFXaA_pre, with Qe the plasma flow rate calculated from a hematocrit of 0.38 and a system blood flow rate of 200 ml/min: 100 ml/min, 72 ml/min and 57 ml/min. Although, as expected, the adsorber-related clearance decreased quickly due to saturation of the beads, the reduction rate achieved still resulted in a very rapid decrease in AFXaA levels. Surgery was ordered and possible within 5 hours after CytoSorb initiation. Conclusion: The CytoSorb® Hemoadsorption device enabled rapid correction of Apixaban-associated anticoagulation. Keywords: Apixaban, CytoSorb, emergency surgery, Hemoadsorption
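The clearance formula quoted in this abstract is simple enough to sketch in code. The snippet below is a minimal illustration (function and variable names are our own; the hematocrit-based conversion from blood flow to plasma flow follows the abstract):

```python
def adsorber_clearance(afxaa_pre, afxaa_post, blood_flow_ml_min=200.0, hematocrit=0.38):
    """Adsorber-related (indirect) clearance: Cl = Qe * (pre - post) / pre.

    Qe is the plasma flow rate, estimated from the system blood flow
    and the hematocrit (plasma fraction = 1 - Ht).
    """
    qe = blood_flow_ml_min * (1.0 - hematocrit)   # plasma flow, ml/min
    extraction = (afxaa_pre - afxaa_post) / afxaa_pre  # fraction removed per pass
    return qe * extraction
```

With a blood flow of 200 ml/min and Ht = 0.38, Qe is 124 ml/min, so the reported clearances of 100, 72 and 57 ml/min correspond to extraction ratios across the adsorber of roughly 0.81, 0.58 and 0.46.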
Procedia PDF Downloads 156
419 Maturity Level of Knowledge Management in Whole Life Costing in the UK Construction Industry: An Empirical Study
Authors: Ndibarefinia Tobin
Abstract:
The UK construction industry has been under pressure for many years to produce economical buildings which offer value for money, not only during the construction phase but, more importantly, during the full life of the building. Whole life costing is an economic analysis tool that takes into account the total cost of investment in, ownership and operation of, and subsequent disposal of a product or system to which the whole life costing method is being applied. In spite of its importance, the practice is still crippled by the lack of tangible evidence and of ‘know-how’ skills and knowledge of the practice, i.e., the lack of professionals with the knowledge and training on the use of the practice in construction projects. This situation is compounded by the absence of available data on whole life costing from relevant projects, the lack of data collection mechanisms, and so on. The aforementioned problems have forced many construction organisations to adopt project enhancement initiatives to boost their performance in the use of whole life costing techniques so as to produce economical buildings which offer value for money during the construction stage as well as over the whole life of the building/asset. The management of knowledge in whole life costing is considered one of these project enhancement initiatives and is becoming imperative to the performance and sustainability of an organisation. Procuring building projects using the whole life costing technique is heavily reliant on the knowledge, experience, ideas and skills of workers, which come from many sources including other individuals, electronic media and documents. Given the diversity of knowledge, capabilities and skills of employees across an organisation, it is important that they are directed and coordinated efficiently so as to capture, retrieve and share knowledge in order to improve the performance of the organisation. 
The implementation of the knowledge management concept reaches different levels in each organisation. Measuring the maturity level of knowledge management in whole life costing practice will paint a comprehensible picture of how knowledge is managed in construction organisations. Purpose: The purpose of this study is to identify knowledge management maturity in UK construction organisations adopting whole life costing in construction projects. Design/methodology/approach: This study adopted a survey method, conducted by distributing questionnaires to large construction companies that implement knowledge management activities in whole life costing practice in construction projects. Four levels of knowledge management maturity were proposed in this study. Findings: The results obtained in the study show 34 contractors at the practised level, 26 contractors at the managed level and 12 contractors at the continuously improved level. Keywords: knowledge management, whole life costing, construction industry, knowledge
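As a concrete anchor for the costing concept in this abstract, a discounted whole-life-cost calculation can be sketched as follows (an illustrative textbook-style formulation, not the model used by the surveyed organisations):

```python
def whole_life_cost(initial_cost, annual_costs, disposal_cost, discount_rate):
    """Discounted whole life cost: acquisition cost plus the present value of
    annual operation/maintenance costs plus the present value of disposal at
    the end of life. annual_costs[t] is the cost incurred in year t+1.
    """
    n = len(annual_costs)
    pv_operation = sum(c / (1.0 + discount_rate) ** (t + 1)
                       for t, c in enumerate(annual_costs))
    pv_disposal = disposal_cost / (1.0 + discount_rate) ** n
    return initial_cost + pv_operation + pv_disposal
```

Comparing alternatives on this whole-life figure, rather than on acquisition cost alone, is what distinguishes the practice from conventional capital budgeting.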
Procedia PDF Downloads 244
418 RA-Apriori: An Efficient and Faster MapReduce-Based Algorithm for Frequent Itemset Mining on Apache Flink
Authors: Sanjay Rathee, Arti Kashyap
Abstract:
Extraction of useful information from large datasets is one of the most important research problems. Association rule mining is one of the best methods for this purpose. Finding possible associations between items in large transaction-based datasets (finding frequent patterns) is the most important part of association rule mining. There exist many algorithms to find frequent patterns, but the Apriori algorithm remains a preferred choice due to its ease of implementation and natural tendency to be parallelized. Many single-machine Apriori variants exist, but the massive amount of data available these days is beyond the capacity of a single machine. Therefore, to meet the demands of this ever-growing data, there is a need for an Apriori algorithm based on multiple machines. For these types of distributed applications, MapReduce is a popular fault-tolerant framework. Hadoop is one of the best open-source software frameworks with the MapReduce approach for distributed storage and distributed processing of huge datasets using clusters built from commodity hardware. However, the heavy disk I/O operation at each iteration of a highly iterative algorithm like Apriori makes Hadoop inefficient. A number of MapReduce-based platforms have been developed for parallel computing in recent years. Among them, two platforms, namely Spark and Flink, have attracted a lot of attention because of their inbuilt support for distributed computations. Earlier we proposed a reduced-Apriori algorithm on the Spark platform which outperforms parallel Apriori, firstly because of the use of Spark and secondly because of the improvement we proposed in standard Apriori. Therefore, this work is a natural sequel of our work and targets implementing, testing and benchmarking Apriori, Reduced-Apriori and our new algorithm ReducedAll-Apriori on Apache Flink, and compares them with the Spark implementation. 
Flink, a streaming dataflow engine, overcomes the disk I/O bottlenecks in MapReduce, providing an ideal platform for distributed Apriori. Flink's pipelining-based structure allows a next iteration to start as soon as partial results of the earlier iteration are available; therefore, there is no need to wait for all reducers' results before starting a next iteration. We conduct in-depth experiments to gain insight into the effectiveness, efficiency and scalability of the Apriori and RA-Apriori algorithms on Flink. Keywords: apriori, apache flink, MapReduce, spark, Hadoop, R-Apriori, frequent itemset mining
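For readers unfamiliar with the algorithm being parallelized here, a single-machine Apriori can be sketched as below (a minimal illustration of candidate generation and support-based pruning; the distributed Flink/Spark variants discussed in the abstract partition the support-counting step across machines):

```python
def apriori(transactions, min_support):
    """Classic Apriori: grow candidate k-itemsets from frequent (k-1)-itemsets
    and prune any candidate whose support falls below min_support (an
    absolute transaction count here).
    """
    transactions = [frozenset(t) for t in transactions]

    def support(itemset):
        # number of transactions containing every item of the itemset
        return sum(1 for t in transactions if itemset <= t)

    # frequent 1-itemsets seed the iteration
    items = {frozenset([i]) for t in transactions for i in t}
    current = {s for s in items if support(s) >= min_support}
    frequent, k = {}, 1
    while current:
        frequent.update({s: support(s) for s in current})
        # candidate generation: join frequent k-itemsets into (k+1)-itemsets
        candidates = {a | b for a in current for b in current if len(a | b) == k + 1}
        current = {c for c in candidates if support(c) >= min_support}
        k += 1
    return frequent
```

The repeated `support` scans over the whole dataset are exactly the per-iteration passes that make the algorithm I/O-bound on Hadoop and a good fit for Flink's pipelined iterations.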
Procedia PDF Downloads 294
417 Analysis of Lift Force in Hydrodynamic Transport of a Finite Sized Particle in Inertial Microfluidics with a Rectangular Microchannel
Authors: Xinghui Wu, Chun Yang
Abstract:
Inertial microfluidics is a competitive fluidic method with applications in the separation of particles, cells and bacteria. In contrast to traditional microfluidic devices with low Reynolds numbers, inertial microfluidics works in the intermediate Re number range, which brings about several intriguing inertial effects on particle separation/focusing to meet real-world throughput requirements. Geometric modifications that make channels irregular in shape can leverage fluid inertia to create complex secondary flows for adjusting the particle equilibrium positions and thus enhance the separation resolution and throughput. Although inertial microfluidics has been extensively studied by experiments, our current understanding of its mechanisms is poor, making it extremely difficult to build rational design guidelines for the particle focusing locations, especially for irregularly shaped microfluidic channels. Inertial particle microfluidics in irregularly shaped channels was investigated in our group. There are several fundamental issues that we need to address. One of them is the balance between the inertial lift forces and the secondary drag forces. Also, it is critical to quantitatively describe the dependence of the lift forces on particle-particle interactions in irregularly shaped channels, such as a rectangular one. To provide physical insights into inertial microfluidics in channels of irregular shapes, in this work the immersed boundary-lattice Boltzmann method (IB-LBM) was introduced and validated to explore the transport characteristics and the underlying mechanisms of an inertially focusing single particle in a rectangular microchannel. The transport dynamics of a finite-sized particle were investigated over wide ranges of Reynolds number (20 < Re < 500) and particle size. 
The results show that the inner equilibrium positions are more difficult to attain in the rectangular channel, which can be explained by the secondary flow caused by the presence of a finite-sized particle. Furthermore, force decoupling analysis was utilized to study the effect of each type of lift force on the inertial migration, and a theoretical model for the lateral lift force of a finite-sized particle in the rectangular channel was established. Such a theoretical model can be used to provide theoretical guidance for the design and operation of inertial microfluidics. Keywords: inertial microfluidics, particle focusing, lift force, IB-LBM
Procedia PDF Downloads 71
416 A Study on the Effect of Design Factors of Slim Keyboard’s Tactile Feedback
Authors: Kai-Chieh Lin, Chih-Fu Wu, Hsiang Ling Hsu, Yung-Hsiang Tu, Chia-Chen Wu
Abstract:
With the rapid development of computer technology, the design of computers and keyboards moves towards a trend of slimness. The change of mobile input devices directly influences users’ behavior. Although multi-touch applications allow entering text through a virtual keyboard, the performance, feedback, and comfort of the technology are inferior to a traditional keyboard, and while manufacturers have launched mobile touch keyboards and projection keyboards, the performance has not been satisfying. Therefore, this study discussed the design factors of slim pressure-sensitive keyboards. The factors were evaluated with an objective evaluation (accuracy and speed) and a subjective evaluation (operability, recognition, feedback, and difficulty) depending on the shape (circle, rectangle, and L-shaped), thickness (flat, 3 mm, and 6 mm), and actuation force (35±10 g, 60±10 g, and 85±10 g) of the keys. Moreover, MANOVA and Taguchi methods (regarding signal-to-noise ratios) were conducted to find the optimal level of each design factor. The research participants were divided into two groups by their typing speed (30 words/minute). Considering the multitude of variables and levels, the experiments were implemented using a fractional factorial design. A representative model of the research samples was established for input task testing. The findings of this study showed that participants with low typing speed primarily relied on vision to recognize the keys, whereas those with high typing speed relied on tactile feedback, which was affected by the thickness and force of the keys. In the objective and subjective evaluations, a combination of keyboard design factors that might result in higher performance and satisfaction was identified (L-shaped, 3 mm, and 60±10 g) as the optimal combination. The learning curve was analyzed to make a comparison with a traditional standard keyboard and to investigate the influence of user experience on keyboard operation. 
The research results indicated that the optimal combination provided input performance inferior to that of a standard keyboard. The results could serve as a reference for the development of related products in industry and for comprehensive application to touch devices and input interfaces with which people interact. Keywords: input performance, mobile device, slim keyboard, tactile feedback
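The Taguchi signal-to-noise analysis used to rank factor levels in this study can be sketched as follows. This is a minimal larger-the-better S/N computation, assuming a response (e.g., typing accuracy or speed) where higher is better; the paper's exact response variables may differ:

```python
import math

def sn_larger_the_better(values):
    """Taguchi larger-the-better signal-to-noise ratio:

        S/N = -10 * log10( (1/n) * sum(1 / y_i**2) )

    Across trials at one factor level, a higher S/N means higher and more
    consistent responses; the optimal level of each factor maximizes S/N.
    """
    n = len(values)
    return -10.0 * math.log10(sum(1.0 / (y * y) for y in values) / n)
```

Computing S/N per level of shape, thickness, and force, then picking the level with the highest ratio for each factor, is how an optimal combination such as (L-shaped, 3 mm, 60±10 g) is identified.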
Procedia PDF Downloads 299
415 Understanding the Effect of Material and Deformation Conditions on the “Wear Mode Diagram”: A Numerical Study
Authors: A. Mostaani, M. P. Pereira, B. F. Rolfe
Abstract:
The increasing application of Advanced High Strength Steel (AHSS) in the automotive industry to fulfill crash requirements has introduced higher levels of wear in stamping dies and parts. Therefore, understanding wear behaviour in sheet metal forming is of great importance, as it can help to reduce the high costs currently associated with tool wear. At the contact between the die and the sheet, the tips of hard tool asperities interact with the softer sheet material. Understanding the deformation that occurs during this interaction is important for our overall understanding of the wear mechanisms. For these reasons, the scratching of a perfectly plastic material by a rigid indenter has been widely examined in the literature, with finite element modelling (FEM) used in recent years to further understand the behaviour. The ‘wear mode diagram’ has been commonly used to classify the deformation regime of the soft work-piece during scratching into three modes: ploughing, wedge formation, and cutting. This diagram, which is based on 2D slip line theory and the upper bound method for a perfectly plastic work-piece and a rigid indenter, relates the different wear modes to attack angle and interfacial strength. It has been the basis for many wear studies and wear models to date. Additionally, it has been concluded that galling is most likely to occur during the wedge formation mode. However, there has been little analysis in the literature of how the material behaviour and deformation conditions associated with metal forming processes influence the wear behaviour. Therefore, the first aim of this work is to use a commercial FEM package (Abaqus/Explicit) to build a 3D model to capture wear modes during scratching with indenters of different attack angles and different interfacial strengths. 
The second goal is to utilise the developed model to understand how wear modes might change in the presence of bulk deformation of the work-piece material as a result of the metal forming operation. Finally, the effect of the work-piece material properties, including strain hardening, is examined to understand how these influence the wear modes and wear behaviour. The results show that both strain hardening and substrate deformation can change the critical attack angle at which the wedge formation regime is activated. Keywords: finite element, pile-up, scratch test, wear mode
Procedia PDF Downloads 327
414 Optimization of Dez Dam Reservoir Operation Using Genetic Algorithm
Authors: Alireza Nikbakht Shahbazi, Emadeddin Shirali
Abstract:
Since optimization issues of water resources are complicated due to the variety of decision-making criteria and objective functions, it is sometimes impossible to resolve them through regular optimization methods, or doing so is time-consuming or costly. Therefore, the use of modern tools and methods is inevitable in resolving such problems. An accurate and practical utilization policy has to be determined in order to use natural resources such as water reservoirs optimally. Water reservoir programming studies aim to determine the final cultivated land area based on predefined agricultural models and water requirements; the dam utilization rule curve is also provided in such studies. The basic information applied in water reservoir programming studies generally includes meteorological, hydrological, agricultural and water reservoir related data, and the geometric characteristics of the reservoir. The Dez dam water resource system was simulated using this basic information in order to determine the capability of its reservoir to meet the objectives of the proposed plan. As a metaheuristic method, the genetic algorithm was applied in order to derive utilization rule curves (intersecting the reservoir volume). MATLAB software was used to solve the foregoing model. Rule curves were first obtained through the genetic algorithm. Then the significance of using rule curves and the decrease in decision-making variables in the system was determined through system simulation and by comparing the results with the optimization results (Standard Operating Procedure). One of the most essential issues in the optimization of a complicated water resource system is the increasing number of variables, because of which a lot of time is required to find an optimum answer, and in some cases no desirable result is obtained. In this research, intersecting the reservoir volume has been applied as a modern approach in order to reduce the number of variables. 
Water reservoir programming studies have been performed based on basic information, general hypotheses and standards, applying a monthly simulation technique over a statistical period of 30 years. Results indicated that application of the rule curve prevents extreme shortages and decreases the monthly shortages. Keywords: optimization, rule curve, genetic algorithm method, Dez dam reservoir
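A minimal sketch of the genetic algorithm approach described in this abstract is shown below. The encoding, operators and toy objective are illustrative stand-ins: genes in [0, 1] could encode monthly rule-curve levels as fractions of reservoir volume, and the fitness function stands in for the simulation-based shortage objective used in the study:

```python
import random

def genetic_algorithm(fitness, n_genes, pop_size=40, generations=200,
                      mutation_rate=0.1, seed=0):
    """Minimal real-coded GA: keep the fitter half as elites, breed the rest
    by one-point crossover of two random elites, and occasionally apply a
    clamped Gaussian mutation to one gene. Elitism makes the best fitness
    non-decreasing across generations.
    """
    rng = random.Random(seed)
    pop = [[rng.random() for _ in range(n_genes)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        elite = scored[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n_genes) if n_genes > 1 else 0
            child = a[:cut] + b[cut:]          # one-point crossover
            if rng.random() < mutation_rate:   # mutate one gene, clamped to [0, 1]
                i = rng.randrange(n_genes)
                child[i] = min(1.0, max(0.0, child[i] + rng.gauss(0.0, 0.1)))
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

# toy objective: monthly releases close to 0.6 of capacity minimize shortages
best = genetic_algorithm(lambda g: -sum((x - 0.6) ** 2 for x in g), n_genes=12)
```

Encoding only the 12 monthly rule-curve levels, instead of a release decision for every month of the 30-year record, is the variable-reduction idea the abstract attributes to intersecting the reservoir volume.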
Procedia PDF Downloads 265
413 Comparison of Two Strategies in Thoracoscopic Ablation of Atrial Fibrillation
Authors: Alexander Zotov, Ilkin Osmanov, Emil Sakharov, Oleg Shelest, Aleksander Troitskiy, Robert Khabazov
Abstract:
Objective: Thoracoscopic surgical ablation of atrial fibrillation (AF) can be performed with two technologies: the first strategy uses the AtriCure device (bipolar, non-irrigated, non-clamping), the second the Medtronic device (bipolar, irrigated, clamping). The study presents a comparative analysis of the clinical outcomes of the two strategies in thoracoscopic ablation of AF using the AtriCure vs. Medtronic devices. Methods: In a two-center study, 123 patients underwent thoracoscopic ablation of AF in the period from 2016 to 2020. Patients were divided into two groups. The first group comprises patients treated with the AtriCure device (N=63), and the second group those treated with the Medtronic device (N=60). Patients were comparable in age, gender, and initial severity of their condition. Group 1 was 65% male with a median age of 57 years, while group 2 was 75% male with a median age of 60 years. Group 1 included patients with paroxysmal AF (14.3%), persistent AF (68.3%) and long-standing persistent AF (17.5%); in group 2 the proportions were 13.3%, 13.3% and 73.3%, respectively. Median ejection fraction and indexed left atrial volume amounted to 63% and 40.6 ml/m² in group 1, and 56% and 40.5 ml/m² in group 2. In addition, group 1 consisted of 39.7% patients with chronic heart failure (NYHA Class II) and 4.8% with chronic heart failure (NYHA Class III), versus 45% and 6.7% in group 2. Follow-up consisted of laboratory tests, chest X-ray, ECG, 24-hour Holter monitoring, and a cardiopulmonary exercise test. Duration of freedom from AF, the distant mortality rate, and the prevalence of cerebrovascular events were compared between the two groups. Results: Exit block was achieved in all patients. According to the Clavien-Dindo classification of surgical complications, the fraction of adverse events was 14.3% and 16.7% in the 1st and 2nd groups, respectively. 
The mean follow-up period in the 1st group was 50.4 (31.8; 64.8) months, in the 2nd group 30.5 (14.1; 37.5) months (P=0.0001). In group 1, total freedom from AF was achieved in 73.3% of patients, among whom 25% had additional antiarrhythmic drug (AAD) therapy or catheter ablation (CA); in group 2 the figures were 90% and 18.3%, respectively (for total freedom from AF, P<0.02). At follow-up, the distant mortality rate in the 1st group was 4.8%, with no fatal events in the 2nd. The prevalence of cerebrovascular events was higher in the 1st group than in the 2nd (6.7% vs. 1.7%, respectively). Conclusions: Despite the relatively shorter follow-up of the 2nd group in the study, the strategy using the Medtronic device showed quite encouraging results. Further research is needed to evaluate the effectiveness of this strategy in the long-term period. Keywords: atrial fibrillation, clamping, ablation, thoracoscopic surgery
Procedia PDF Downloads 110
412 Optimal Allocation of Battery Energy Storage Considering Stiffness Constraints
Authors: Felipe Riveros, Ricardo Alvarez, Claudia Rahmann, Rodrigo Moreno
Abstract:
Around the world, many countries have committed to the decarbonization of their electricity systems. Under this global drive, converter-interfaced generators (CIG) such as wind and photovoltaic generation appear as cornerstones to achieve these energy targets. Despite its benefits, increasing use of CIG brings several technical challenges in power systems, especially from a stability viewpoint. Among the key differences are the limited short-circuit current capacity, the inertia-less characteristic of CIG, and response times within the electromagnetic timescale. Along with the integration of CIG into the power system, one enabling technology for the energy transition towards low-carbon power systems is battery energy storage systems (BESS). Because of the flexibility that BESS provide in power system operation, their integration allows for mitigating the variability and uncertainty of renewable energies, thus optimizing the use of existing assets and reducing operational costs. BESS can also support power system stability by injecting reactive power during faults, providing short-circuit currents, and delivering fast frequency response. However, most methodologies for sizing and allocating BESS in power systems are based on economic aspects and do not exploit the benefits that BESS can offer to system stability. In this context, this paper presents a methodology for determining the optimal allocation of BESS in weak power systems with high levels of CIG. Unlike traditional economic approaches, this methodology incorporates stability constraints to allocate BESS, aiming to mitigate instability issues arising from weak grid conditions with low short-circuit levels. The proposed methodology offers valuable insights for power system engineers and planners seeking to maintain grid stability while harnessing the benefits of renewable energy integration. 
The methodology is validated on the reduced Chilean electrical system. The results show that integrating BESS into a power system with high levels of CIG under stability criteria contributes to decarbonizing and strengthening the network in a cost-effective way while sustaining system stability. This paper potentially lays the foundation for understanding the benefits of integrating BESS in electrical power systems and coordinating their placement in future converter-dominated power systems. Keywords: battery energy storage, power system stability, system strength, weak power system
Procedia PDF Downloads 61
411 Designing Electrically Pumped Photonic Crystal Surface Emitting Lasers Based on a Honeycomb Nanowire Pattern
Authors: Balthazar Temu, Zhao Yan, Bogdan-Petrin Ratiu, Sang Soon Oh, Qiang Li
Abstract:
Photonic crystal surface emitting lasers (PCSELs) have recently become an area of active research because of the advantages these lasers have over edge-emitting lasers and vertical cavity surface emitting lasers (VCSELs). PCSELs can emit laser beams with high power (from the order of a few milliwatts to watts or even tens of watts) which scales with the emission area, while maintaining single-mode operation even at large emission areas. Most PCSELs reported in the literature are air-hole based, with only a few demonstrations of nanowire-based PCSELs. We previously reported an optically pumped, nanowire-based PCSEL operating in the O band by using a honeycomb lattice. Nanowire-based PCSELs have the advantage of being grown on a silicon platform without threading dislocations. It is desirable to extend their operating wavelength to the C band to open up more applications, including eye-safe sensing, lidar and long-haul optical communications. In this work, we first analyze how the lattice constant, nanowire diameter, nanowire height and side length of the hexagon in the honeycomb pattern can be changed to increase the operating wavelength of the honeycomb-based PCSELs to the C band. Then, as an attempt to make our device electrically pumped, we present the finite-difference time-domain (FDTD) simulation results with metals on the nanowire. The results for different metals on the nanowire are presented in order to choose the metal which gives the device the best quality factor. The metals under consideration are those which form a good ohmic contact with p-type doped InGaAs with low contact resistivity and a decent sticking coefficient to the semiconductor; such metals include tungsten, titanium, palladium and platinum. Using the chosen metal, we demonstrate the impact of the thickness of the metal, for a given nanowire height, on the quality factor of the device. 
We also investigate how the height of the nanowire affects the quality factor for a fixed thickness of the metal. Finally, the main steps in making the practical device are discussed. Keywords: designing nanowire PCSEL, designing PCSEL on silicon substrates, low threshold nanowire laser, simulation of photonic crystal lasers
Procedia PDF Downloads 17
410 CuIn₃Se₅ Colloidal Nanocrystals and Its Ink-Coated Films for Photovoltaics
Authors: M. Ghali, M. Elnimr, G. F. Ali, A. M. Eissa, H. Talaat
Abstract:
CuIn₃Se₅ is indexed as an ordered vacancy compound having excellent matching properties with the CuInGaSe (CIGS) solar absorber layer. For example, the valence band offset of CuIn₃Se₅ with CIGS is nearly 0.3 eV, and the lattice mismatch is less than 1%, besides the absence of discontinuity in their conduction bands. Thus, CuIn₃Se₅ can work as a passivation layer for repelling holes from the CIGS/CdS interface and hence reduce interface carrier recombination, consequently enhancing the efficiency of CIGS/CdS solar cells. Theoretically, it was reported earlier that an improvement in the efficiency of a p-CIGS-based solar cell with a thin ~100 nm layer of n-CuIn₃Se₅ is expected. Recently, a reported experiment demonstrated a significant improvement in the efficiency of Molecular Beam Epitaxy (MBE) grown CIGS solar cells from 13.4 to 14.5% via inserting a thin layer of MBE-grown Cu(In,Ga)₃Se₅ at the CdS/CIGS interface. It should be mentioned that CuIn₃Se₅, in either bulk or thin film form, is usually fabricated by high-vacuum physical vapor deposition techniques (e.g., three-source co-evaporation, RF sputtering, flash evaporation, and molecular beam epitaxy). In addition, achieving photosensitive films of n-CuIn₃Se₅ is important for new hybrid organic/inorganic structures, where an inorganic photo-absorber layer with n-type conductivity can form an n-p junction with an organic p-type material (e.g., conductive polymers). A detailed study of the physical properties of CuIn₃Se₅ is still necessary for a better understanding of device operation and further improvement of solar cell performance. Here, we report on the low-cost synthesis of CuIn₃Se₅ in nano-scale size, with an average diameter of ~10 nm, using simple solution-based colloidal chemistry. 
In contrast to traditionally grown bulk tetragonal CuIn₃Se₅ crystals produced using high-vacuum-based technology, our colloidal CuIn₃Se₅ nanocrystals show a cubic crystal structure with a nanoparticle shape and a band gap of ~1.33 eV. Ink-coated thin films prepared from these nanocrystal colloids display an n-type character, a 1.26 eV band gap and strong photo-responsive behavior under incident white light. This suggests the potential use of colloidal CuIn₃Se₅ as an active layer in all-solution-processed thin film solar cells. Keywords: nanocrystals, CuInSe, thin film, optical properties
Procedia PDF Downloads 155
409 Effect of Downstream Pressure in Tuning the Flow Control Orifices of Pressure Fed Reaction Control System Thrusters
Authors: Prakash M.N, Mahesh G, Muhammed Rafi K.M, Shiju P. Nair
Abstract:
Introduction: In launch vehicle missions, reaction control thrusters are used for the three-axis stabilization of the vehicle during the coasting phases. A pressure-fed propulsion system is used for the operation of these thrusters due to its lower complexity. In liquid stages, these thrusters are designed to draw propellant from the same tank used for the main propulsion system. So, in order to regulate the propellant flow rates of these thrusters, flow control orifices are used in the feed lines. These orifices are calibrated separately as per the flow rate requirement of individual thrusters for the nominal operating conditions. In some missions, it was observed that the thrusters were operated at higher thrust than nominal. This point was addressed through a series of cold flow and hot tests carried out on the ground, and this paper elaborates on the details of the same. Discussion: In order to find out the exact reason for this phenomenon, two flight-configuration thrusters were identified and hot tested on the ground with calibrated orifices and feed lines. During these tests, the chamber pressure, which is directly proportional to the thrust, was measured. In both cases, chamber pressures higher than nominal by 0.32 bar to 0.7 bar were recorded. The increase in chamber pressure is due to an increase in the oxidizer flow rate of both thrusters. Upon further investigation, it was observed that the calibration of the feed line had been done with ambient pressure downstream, but in actual flight conditions the orifices operate with 10 to 11 bar pressure downstream. Due to this higher downstream pressure, the flow through the orifices increases, and thereby the thrusters operate with higher chamber pressure values. Conclusion: As part of further investigatory tests, two fresh thrusters were realized. Orifice tuning of these thrusters was carried out in three different ways. 
In the first trial, the orifice tuning was done by simulating 1 bar pressure downstream. The second trial was done with the injector assembled downstream. In the third trial, a downstream pressure equal to the flight injection pressure was simulated. Using these calibrated orifices, hot tests were carried out in simulated vacuum conditions. Chamber pressure and flow rate values exactly matched the prediction for the second and third trials, but for the first trial the chamber pressure values obtained in the hot test were more than the prediction. This clearly shows that the flow is detached in the 1st trial and attached in the 2nd and 3rd trials. Hence, the error in tuning the flow control orifices is pinpointed as the reason for the higher chamber pressure observed in flight. Keywords: reaction control thruster, propellant, orifice, chamber pressure
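The detached-versus-attached effect described in this abstract can be illustrated with the standard incompressible orifice equation. This is a simplified sketch: the discharge coefficients below are typical textbook values for a sharp-edged orifice, not the measured values of the flight hardware, and real cavitating flow involves vapor-pressure and geometry details not captured here:

```python
import math

CD_DETACHED = 0.61   # separated (cavitating) jet, typical sharp-edged value
CD_ATTACHED = 0.81   # reattached flow at high back pressure, typical value

def orifice_mass_flow(cd, area_m2, dp_pa, rho_kg_m3):
    """Incompressible orifice flow: mdot = Cd * A * sqrt(2 * rho * dP)."""
    return cd * area_m2 * math.sqrt(2.0 * rho_kg_m3 * dp_pa)

# An orifice tuned to the target flow with ambient downstream pressure runs
# with a detached jet. In flight, the 10-11 bar back pressure makes the flow
# reattach, so for the same pressure drop the same orifice passes more flow.
dp = 9e5  # Pa, pressure drop across the orifice (illustrative)
m_calibration = orifice_mass_flow(CD_DETACHED, 1e-6, dp, 1000.0)
m_flight = orifice_mass_flow(CD_ATTACHED, 1e-6, dp, 1000.0)
```

The ratio of the two discharge coefficients is what shows up in flight as an oxidizer flow rate, and hence chamber pressure, above nominal.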
Procedia PDF Downloads 201
408 Proposed Design of an Optimized Transient Cavity Picosecond Ultraviolet Laser
Authors: Marilou Cadatal-Raduban, Minh Hong Pham, Duong Van Pham, Tu Nguyen Xuan, Mui Viet Luong, Kohei Yamanoi, Toshihiko Shimizu, Nobuhiko Sarukura, Hung Dai Nguyen
Abstract:
There is a great deal of interest in developing all-solid-state tunable ultrashort pulsed lasers emitting in the ultraviolet (UV) region for applications such as micromachining, investigation of charge carrier relaxation in conductors, and probing of ultrafast chemical processes. However, direct short-pulse generation is not as straightforward in solid-state gain media as it is for near-IR tunable solid-state lasers such as Ti:sapphire, due to the difficulty of obtaining continuous-wave laser operation, which is required for Kerr lens mode-locking schemes utilizing spatial or temporal Kerr-type nonlinearity. In this work, the transient cavity method, which was reported to generate ultrashort laser pulses in dye lasers, is extended to a solid-state gain medium. Ce:LiCAF was chosen among the rare-earth-doped fluoride laser crystals emitting in the UV region because of its broad tunability (from 280 to 325 nm), with enough bandwidth to generate 3-fs pulses, its sufficiently large effective gain cross section (6.0 × 10⁻¹⁸ cm²) favorable for oscillators, and its high saturation fluence (115 mJ/cm²). Numerical simulations are performed to investigate the spectro-temporal evolution of the broadband UV laser emission from Ce:LiCAF, represented as a system of two homogeneously broadened singlet states, by solving the rate equations extended to multiple wavelengths. The goal is to find the appropriate cavity length and Q-factor to achieve the optimal photon cavity decay time and pumping energy for resonator transients that will lead to ps UV laser emission from a Ce:LiCAF crystal pumped by the fourth harmonic (266 nm) of a Nd:YAG laser. Results show that a single ps pulse can be generated from a 1-mm, 1 mol% Ce³⁺-doped LiCAF crystal using an output coupler with 10% reflectivity (low Q) and an oscillator cavity that is 2 mm long (short cavity).
This technique can be extended to other fluoride-based solid-state laser gain media.
Keywords: rare-earth-doped fluoride gain medium, transient cavity, ultrashort laser, ultraviolet laser
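The role of the low-Q, short cavity quoted above can be illustrated with the photon cavity decay time, which for a 2-mm cavity and a 10% output coupler already lands on the picosecond scale. This is a minimal sketch assuming a lossless two-mirror cavity with a perfectly reflecting back mirror, not the paper's full multi-wavelength rate-equation model:

```python
import math

def photon_cavity_decay_time(length_m, r_output, n=1.0, r_back=1.0):
    """Photon lifetime tau_c = 2 n L / (c * ln(1/(R1*R2))),
    neglecting intracavity losses (illustrative assumption)."""
    c = 2.99792458e8
    return 2.0 * n * length_m / (c * math.log(1.0 / (r_output * r_back)))

# Cavity from the abstract: 2 mm long, 10% output-coupler reflectivity
tau_c = photon_cavity_decay_time(2e-3, 0.10)
print(f"photon cavity decay time: {tau_c*1e12:.1f} ps")  # on the order of a few ps
```

A shorter cavity or a lower reflectivity (lower Q) both reduce this decay time, which is why the resonator transients can be engineered toward ps emission.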
Procedia PDF Downloads 357
407 A Rare Cause of Abdominal Pain Post Caesarean Section
Authors: Madeleine Cox
Abstract:
Objective: Discussion of the diagnosis of vernix caseosa peritonitis, the recovery, and the subsequent caesarean section. Case: A 30-year-old G4P1 presented in labour at 40 weeks, planning a vaginal birth after a previous caesarean section. She underwent an emergency caesarean section due to concerns for fetal wellbeing on CTG. She was found to have a thin lower segment with a very small area of dehiscence centrally. The operation was uncomplicated, and she recovered and went home 2 days later. She then re-presented to the emergency department on day 6 postpartum feeling very unwell, with significant abdominal pain, tachycardia, and urinary retention. She had a raised white cell count of 13.7 with neutrophils of 11.64 and a CRP of 153. An abdominal ultrasound was poorly tolerated by the patient and did not aid in the diagnosis. Chest and abdominal X-rays were normal. She underwent a CT of the chest and abdomen, which found a small volume of free fluid with no apparent collection. Given that no obvious cause of her symptoms was found and the patient did not improve, she had a repeat CT 2 days later, which showed progression of the free fluid. A diagnostic laparoscopy was performed with the general surgeons, which revealed turbid fluid and an inflamed appendix, which was removed. The patient improved remarkably post-operatively. The histology showed periappendicitis and acute appendicitis with a marked serosal inflammatory reaction to vernix caseosa. Following this, the patient went on to recover well. Four years later, the patient was booked for an elective caesarean section; on entry into the abdomen, there were very minimal adhesions, and the surgery and her subsequent recovery were uncomplicated. Discussion: This case represents the diagnostic dilemma of a patient who presents unwell without a clear cause. In this circumstance, multiple modes of imaging did not aid in her diagnosis, and so she underwent diagnostic surgery.
It is important to evaluate whether a patient is responding as expected to the typical course of post-operative pain and to adjust management accordingly. A multidisciplinary team approach can help to provide a diagnosis for these patients. Conclusion: Vernix caseosa peritonitis is a rare cause of acute abdomen postpartum. There are few reports in the literature of the initial presentation and no reports on the possible effects on future pregnancies. This patient did not have any complications in her following pregnancy or delivery secondary to her diagnosis of vernix caseosa peritonitis. This may assist in counselling other women who have had this uncommon diagnosis.
Keywords: peritonitis, obstetrics, caesarean section, pain
Procedia PDF Downloads 104
406 Causal Inference Engine between Continuous Emission Monitoring System Combined with Air Pollution Forecast Modeling
Authors: Yu-Wen Chen, Szu-Wei Huang, Chung-Hsiang Mu, Kelvin Cheng
Abstract:
This paper developed a data-driven model to deal with the causality between the Continuous Emission Monitoring System (CEMS, by the Environmental Protection Administration, Taiwan) in industrial factories and the air quality of the surrounding environment. Compared to the heavy computational burden of traditional numerical models for regional weather and air pollution simulation, the lightweight proposed model can provide hourly forecasts from current observations of weather, air pollution, and emissions from factories. The observation data include wind speed, wind direction, relative humidity, temperature, and others. The observations can be collected in real time from the Open APIs of Civil IoT Taiwan, which are sourced from 439 weather stations, 10,193 qualitative air stations, 77 national quantitative stations, and 140 CEMS-equipped quantitative industrial factories. This study completed a causal inference engine and produced an air pollution forecast for the next 12 hours related to local industrial factories. The outcomes of the pollution forecasting are produced hourly with a grid resolution of 1 km × 1 km on the IIoTC (Industrial Internet of Things Cloud) and saved in netCDF4 format. The elaborated procedures to generate forecasts comprise data recalibration, outlier elimination, Kriging interpolation, and particle tracking with random walk techniques for the mechanisms of diffusion and advection. The solution of these equations reveals the causality between factory emissions and the associated air pollution. Further, with the aid of installed real-time flue emission (Total Suspended Particulate, TSP) sensors and the mentioned forecasted air pollution map, this study also disclosed the conversion mechanism between TSP and PM2.5/PM10 for different regional and industrial characteristics, according to long-term data observation and calibration.
Together, these time-series qualitative and quantitative data make up a practicable cloud-based causal inference engine for factory management control. Once the forecasted air quality for a region is marked as harmful, the correlated factories are notified and asked to curtail operations and reduce emissions in advance.
Keywords: continuous emission monitoring system, total suspended particulates, causal inference, air pollution forecast, IoT
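The advection-diffusion mechanism named above can be sketched as a particle-tracking random walk: each tracer particle is advected by the local wind and perturbed by a Gaussian step whose variance follows the diffusion coefficient. The wind components, diffusion coefficient, and particle count below are illustrative assumptions, not the study's calibrated values:

```python
import math
import random

def step_particles(particles, u, v, diff_coeff, dt):
    """Advance tracer particles one time step: advection by the local wind
    (u, v in m/s) plus a Gaussian random walk representing turbulent
    diffusion, with step std sigma = sqrt(2 * K * dt)."""
    sigma = math.sqrt(2.0 * diff_coeff * dt)
    return [
        (x + u * dt + random.gauss(0.0, sigma),
         y + v * dt + random.gauss(0.0, sigma))
        for x, y in particles
    ]

random.seed(0)
# 1,000 particles released at a stack located at the origin
cloud = [(0.0, 0.0)] * 1000
for _ in range(12):                 # twelve hourly steps (the 12 h horizon)
    cloud = step_particles(cloud, u=2.0, v=0.5, diff_coeff=50.0, dt=3600.0)

mean_x = sum(x for x, _ in cloud) / len(cloud)
print(f"mean downwind drift after 12 h: {mean_x/1000:.1f} km")
```

Binning the resulting particle positions onto a 1 km × 1 km grid would yield a concentration field of the kind the engine writes out in netCDF4.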
Procedia PDF Downloads 86
405 Ethiopian Textile and Apparel Industry: Study of the Information Technology Effects in the Sector to Improve Their Integrity Performance
Authors: Merertu Wakuma Rundassa
Abstract:
Global competition and rapidly changing customer requirements are forcing major changes in the production styles and configuration of manufacturing organizations. Increasingly, traditional centralized and sequential manufacturing planning, scheduling, and control mechanisms are being found insufficiently flexible to respond to changing production styles and highly dynamic variations in product requirements. The traditional approaches limit the expandability and reconfiguration capabilities of manufacturing systems. Thus, many business houses face increasing pressure to lower production costs, improve production quality, and increase responsiveness to customers. In textile and apparel manufacturing, globalization has led to increased competition and quality awareness, and these industries have changed tremendously in the last few years. So, to sustain competitive advantage, companies must re-examine and fine-tune their business processes to deliver high-quality goods at very low costs, and it has become very important for the textile and apparel industries to integrate themselves with information technology to survive. IT can create competitive advantages for companies by improving coordination and communication among trading partners, increasing the availability of information for intermediaries and customers, and providing added value at various stages along the entire chain. Ethiopia is in the process of realizing its potential as a future sourcing location for the global textile and garment industry. With a population of over 90 million people and the fastest-growing non-oil economy in Africa, Ethiopia today represents limitless opportunities for international investors. For the textile and garment industry, Ethiopia promises a low-cost production location with natural resources such as cotton to enable the setup of vertically integrated textile and garment operations.
However, due to the lack of integration of its business activities, the textile and apparel industry of Ethiopia faces the problem that it cannot be competitive in the global market. On the other hand, the textile and apparel industries of other countries have changed tremendously in the last few years, and globalization has led to increased competition and quality awareness. So the aim of this paper is to study the trend of the Ethiopian textile and apparel industry in applying different IT systems to integrate it into the global market.
Keywords: information technology, business integrity, textile and apparel industries, Ethiopia
Procedia PDF Downloads 362
404 Fuzzy Expert Approach for Risk Mitigation on Functional Urban Areas Affected by Anthropogenic Ground Movements
Authors: Agnieszka A. Malinowska, R. Hejmanowski
Abstract:
A number of European cities are strongly affected by ground movements caused by anthropogenic activities or post-anthropogenic metamorphosis, mainly water pumping, ongoing mining operations, the collapse of post-mining underground voids, or mining-induced earthquakes. These activities lead to large- and small-scale ground displacements and ground ruptures. The ground movements occurring in urban areas can considerably affect the stability and safety of structures and infrastructure. The complexity of the ground deformation phenomenon, in relation to the vulnerability of structures and infrastructure, leads to considerable constraints in assessing the threat to those objects. However, increasing access to free software and satellite data could pave the way for developing new methods and strategies for environmental risk mitigation and management. Open source geographic information systems (OS GIS) may support data integration, management, and risk analysis. Recently developed methods based on fuzzy logic and expert methods for assessing the damage risk to buildings and infrastructure can be integrated into OS GIS. Those methods were verified by back analysis, proving their accuracy. Moreover, they can be supported by ground displacement observations: based on freely available data from the European Space Agency and free software, ground deformation can be estimated. The main innovation presented in the paper is the application of open source software (OS GIS) to integrate the developed models and assess the threat to urban areas. This approach is reinforced by analysis of ground movement based on free satellite data, which supports the verification of the ground movement prediction models. Moreover, satellite data enable the mapping of ground deformation in urbanized areas. The developed models and methods have been implemented in one of the urban areas endangered by underground mining activity.
Vulnerability maps supported by satellite ground movement observations would mitigate the hazards of land displacements in urban areas close to mines.
Keywords: fuzzy logic, open source geographic information science (OS GIS), risk assessment on urbanized areas, satellite interferometry (InSAR)
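A fuzzy expert approach of the kind described above can be sketched as a small Mamdani-style rule base combining a ground deformation indicator with a structure vulnerability score. The membership thresholds, rule set, and consequent values below are illustrative assumptions, not the paper's calibrated expert model:

```python
def trimf(x, a, b, c):
    """Triangular membership function on [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def damage_risk(tilt, vulnerability):
    """Toy two-input fuzzy rules; both inputs normalized to roughly [0, 1].
    Output: risk score in [0, 1] via weighted-average defuzzification."""
    t_low, t_high = trimf(tilt, -1, 0, 0.6), trimf(tilt, 0.4, 1, 2)
    v_low, v_high = trimf(vulnerability, -1, 0, 0.6), trimf(vulnerability, 0.4, 1, 2)
    rules = [  # (firing strength via min as AND, risk consequent)
        (min(t_high, v_high), 1.0),  # strong deformation, fragile building
        (min(t_high, v_low), 0.5),
        (min(t_low, v_high), 0.5),
        (min(t_low, v_low), 0.0),    # mild deformation, robust building
    ]
    total = sum(w for w, _ in rules)
    return sum(w * r for w, r in rules) / total if total else 0.0

print(damage_risk(0.9, 0.9))   # high risk
print(damage_risk(0.1, 0.1))   # low risk
```

Evaluating such a score per building footprint in OS GIS, with the deformation input taken from InSAR-derived displacement maps, yields the vulnerability maps the paper describes.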
Procedia PDF Downloads 159
403 Wind Generator Control in Isolated Site
Authors: Glaoui Hachemi
Abstract:
Wind has been proven to be a cost-effective and reliable energy source. Technological advancements over the last years have placed wind energy in a firm position to compete with conventional power generation technologies. Algeria has a vast uninhabited land area, where the south (desert) represents the greatest part, with a considerable wind regime. In this paper, an analysis of wind energy utilization as a viable energy substitute in six selected sites widely distributed over the south of Algeria is presented. Wind speed frequency distribution data obtained from the Algerian Meteorological Office are used to calculate the average wind speed and the available wind power. The annual energy produced by the Fuhrlander FL 30 wind machine is obtained using two methods. The analysis shows that in southern Algeria, at 10 m height, the available wind power varies between 160 and 280 W/m², except for Tamanrasset. The highest potential wind power was found at Adrar, where the wind speed is above 3 m/s 88% of the time. Besides, it is found that the annual wind energy generated by that machine lies between 33 and 61 MWh, except for Tamanrasset, with only 17 MWh. Since wind turbines are usually installed at a height greater than 10 m, an increased output of wind energy can be expected. The wind resource appears to be suitable for power production in the south, and it could provide a viable substitute for diesel oil for irrigation pumps and electricity generation. In this paper, a model of the wind turbine (WT) with a permanent magnet synchronous generator (PMSG) and its associated controllers is presented. The increase of wind power penetration in power systems has meant that conventional power plants are gradually being replaced by wind farms. In fact, today wind farms are required to actively participate in power system operation in the same way as conventional power plants.
In fact, power system operators have revised the grid connection requirements for wind turbines and wind farms, and now demand that these installations be able to carry out more or less the same control tasks as conventional power plants. For dynamic power system simulations, the PMSG wind turbine model includes an aerodynamic rotor model, a lumped-mass representation of the drive train system, and a generator model. In this paper, we propose a model, with an implementation in MATLAB/Simulink, of each system component of an off-grid small wind turbine.
Keywords: wind generator systems, permanent magnet synchronous generator (PMSG), wind turbine (WT) modeling, MATLAB/Simulink environment
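The available wind power figures quoted above follow from the standard power density relation P/A = ½ρv³, and the expected gain above 10 m can be sketched with a power-law height extrapolation. This is a minimal sketch: the example wind speed, air density, and the 1/7 shear exponent are common assumptions, not values from the paper:

```python
def wind_power_density(v, rho=1.225):
    """Available wind power per unit swept area, P/A = 0.5 * rho * v^3 (W/m^2)."""
    return 0.5 * rho * v ** 3

def extrapolate_speed(v10, z, alpha=1.0 / 7.0):
    """Power-law extrapolation of the 10 m wind speed to hub height z (m);
    the 1/7 exponent is a typical open-terrain assumption."""
    return v10 * (z / 10.0) ** alpha

v10 = 6.0                       # example 10 m wind speed, m/s
v30 = extrapolate_speed(v10, 30.0)
print(f"P/A at 10 m:      {wind_power_density(v10):.0f} W/m^2")
print(f"P/A at 30 m hub:  {wind_power_density(v30):.0f} W/m^2")
```

Because power scales with the cube of the wind speed, even the modest speed gain at hub height increases the available power density substantially, consistent with the paper's remark about turbines installed above 10 m.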
Procedia PDF Downloads 337