Search results for: time delayed SVIRS epidemic model
29895 A Simple Fluid Dynamic Model for Slippery Pulse Pattern in Traditional Chinese Pulse Diagnosis
Authors: Yifang Gong
Abstract:
Pulse diagnosis is one of the most important diagnostic methods in traditional Chinese medicine, and also one of the trickiest to learn: it is traditionally said that the pulse can only be sensed, not explained. This opacity has become a serious threat to the survival of the method, even though Chinese doctors have accumulated a large body of experience over several thousand years of practice. A pulse pattern called the 'slippery pulse' is one of the indications of pregnancy. A simple fluid dynamic model is proposed to simulate the effect of the presence of a placenta. The placenta is modeled as an extra plenum in an extremely simplified fluid network model. It is found that, because of this extra plenum, the pulse pattern indeed shows a secondary peak within one pulse period. To the author's knowledge, this work is the first to show a link between pulse diagnosis and basic physical principles. Key parameters that might affect the pattern are also investigated.
Keywords: Chinese medicine, flow network, pregnancy, pulse
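The abstract does not give the model equations. As a purely illustrative sketch (not the author's actual network), a lumped two-compartment flow model with an optional "extra plenum" might look like the following; all parameter values and the function name are invented for demonstration:

```python
import math

def simulate_pulse(extra_plenum=True, dt=1e-3, periods=3):
    """Toy lumped-parameter flow network: a periodic inflow drives an
    arterial compartment; optionally a second compartment (the 'extra
    plenum') is coupled to it through a resistance.  All parameter
    values are arbitrary illustrative choices."""
    C1, C2 = 1.0, 0.5          # compliances of the two compartments
    R_out, R_link = 1.0, 0.3   # outflow and coupling resistances
    p1, p2 = 0.0, 0.0          # compartment pressures
    T = 1.0                    # pulse period
    trace = []
    for k in range(int(periods * T / dt)):
        t = k * dt
        q_in = max(0.0, math.sin(2 * math.pi * t / T)) ** 2  # pulsatile inflow
        q_out = p1 / R_out
        q_link = (p1 - p2) / R_link if extra_plenum else 0.0
        p1 += dt * (q_in - q_out - q_link) / C1  # explicit Euler update
        p2 += dt * q_link / C2
        trace.append(p1)
    return trace
```

Comparing the pressure trace with and without the extra compartment is how such a model would probe whether the plenum alters the within-period pulse shape.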
Procedia PDF Downloads 381
29894 Time to Cure from Obstetric Fistula and Its Associated Factors among Women Admitted to Addis Ababa Hamlin Fistula Hospital, Addis Ababa Ethiopia: A Survival Analysis
Authors: Chernet Mulugeta, Girma Seyoum, Yeshineh Demrew, Kehabtimer Shiferaw
Abstract:
Background: Obstetric fistula (OF) is a serious medical condition involving an abnormal opening between the vagina and bladder (vesico-vaginal fistula) or the vagina and rectum (recto-vaginal fistula). It is usually caused by prolonged obstructed labour. Despite its serious health and psychosocial consequences, there is a paucity of evidence regarding the time it takes to heal from OF. Objective: The aim of this study was to assess the time to cure from obstetric fistula and its predictors among women admitted to Addis Ababa Hamlin Fistula Hospital, Addis Ababa, Ethiopia. Methodology: An institution-based retrospective cohort study was conducted from January 2015 to December 2020 among 434 randomly selected women with OF at Addis Ababa Hamlin Fistula Hospital. Data were collected using a structured checklist adapted from a similar study; data collected with the Open Data Kit (ODK) were exported and analyzed using STATA (14.2). The Kaplan-Meier method was used to compare recovery times from OF. To identify predictors of time to cure, a Cox regression model was fitted, and adjusted hazard ratios with 95% confidence intervals were used to estimate the strength of the associations. Results: The average time to recover from obstetric fistula was 3.95 (95% CI: 3.0-4.6) weeks. About three-quarters of the women [72.8% (95% CI: 0.65-1.2)] were physically cured of obstetric fistula. Having secondary education or above [AHR=3.52; 95% CI (1.98, 6.25)] compared to no formal education, having a live birth [AHR=1.64; 95% CI (1.22, 2.21)], having an intact bladder [AHR=2.47; 95% CI (1.1, 5.54)] compared to a totally destroyed one, and having a grade 1 fistula [AHR=1.98; 95% CI (1.19, 3.31)] compared to grade 3 were significant predictors of a shorter time to cure from obstetric fistula. Conclusion and recommendation: Overall, the proportion of women with OF who were not cured was unacceptably high, and the time taken to recover from the fistula was also prolonged. These findings call for work on the identified predictors to improve the time to recovery from OF.
Keywords: time to recovery, obstetric fistula, predictors, Ethiopia
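The Kaplan-Meier estimator referenced in the methodology is standard. A minimal sketch of it, with an invented toy dataset (weeks of follow-up, 1 = cured, 0 = censored) rather than the study's data, is:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier estimate of the survival function S(t).
    times  : observed follow-up times
    events : 1 if the event (e.g. cure) was observed, 0 if censored
    Returns a list of (time, S(t)) steps at each event time."""
    data = sorted(zip(times, events))
    n = len(data)
    s = 1.0
    at_risk = n
    curve = []
    i = 0
    while i < n:
        t = data[i][0]
        d = sum(1 for tt, e in data if tt == t and e == 1)  # events at t
        c = sum(1 for tt, e in data if tt == t)             # all leaving at t
        if d > 0:
            s *= (1 - d / at_risk)   # multiply by conditional survival
            curve.append((t, s))
        at_risk -= c
        i += c
    return curve
```

For the toy data `times=[1, 2, 2, 3, 4]`, `events=[1, 1, 0, 1, 0]`, the curve steps down at weeks 1, 2 and 3.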
Procedia PDF Downloads 86
29893 General Time-Dependent Sequenced Route Queries in Road Networks
Authors: Mohammad Hossein Ahmadi, Vahid Haghighatdoost
Abstract:
Spatial databases have been an active area of research for years. In this paper, we study how to answer general time-dependent sequenced route queries. Given a user's origin and destination over a time-dependent road network graph, an ordered list of categories of interest, and a departure time interval, our goal is to find the minimum-travel-time path, along with the best departure time, that minimizes the total travel time from the source to the destination while passing through a sequence of points of interest belonging to each of the specified categories. The challenge of this problem is the added complexity over optimal sequenced route queries: first, the road network is time-dependent, and second, the user specifies a departure time interval instead of a single departure time instant. For processing general time-dependent sequenced route queries, we propose two solutions, the Discrete-Time and Continuous-Time Sequenced Route approaches, which find approximate and exact solutions, respectively. Both approaches traverse the road network following the A*-search paradigm, equipped with an efficient heuristic function for shrinking the search space. Extensive experiments are conducted to verify the efficiency of our proposed approaches.
Keywords: trip planning, time dependent, sequenced route query, road networks
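The core primitive behind such queries, earliest-arrival search on a time-dependent FIFO road network, can be sketched as a time-dependent variant of Dijkstra (A* with a zero heuristic). The graph and cost functions below are invented examples, not from the paper:

```python
import heapq

def earliest_arrival(graph, source, target, t0):
    """Time-dependent Dijkstra.  graph[u] is a list of (v, travel_time_fn)
    where travel_time_fn(t) gives the edge traversal time when departing
    at time t (FIFO assumed).  Returns earliest arrival time at target."""
    best = {source: t0}
    pq = [(t0, source)]
    while pq:
        t, u = heapq.heappop(pq)
        if u == target:
            return t
        if t > best.get(u, float('inf')):
            continue  # stale queue entry
        for v, cost in graph.get(u, []):
            ta = t + cost(t)
            if ta < best.get(v, float('inf')):
                best[v] = ta
                heapq.heappush(pq, (ta, v))
    return float('inf')

# Toy network: travel time A->B drops after t = 5 (e.g. peak traffic ends)
graph = {
    'A': [('B', lambda t: 10 if t < 5 else 2), ('C', lambda t: 4)],
    'C': [('B', lambda t: 4)],
}
```

Departing at t = 0 the detour via C wins (arrival 8); departing at t = 5 the direct edge wins (arrival 7), which is exactly why the departure time interval matters in these queries.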
Procedia PDF Downloads 321
29892 GNSS-Aided Photogrammetry for Digital Mapping
Authors: Muhammad Usman Akram
Abstract:
This research work is based on GNSS-aided photogrammetry for digital mapping. It focuses on the topographic survey of an area or site to be used in future planning and development (P&D), or for further examination, exploration, research, and inspection. Surveying and mapping in hard-to-access and hazardous areas are very difficult using traditional techniques and methodologies, which are also time-consuming and labor-intensive and offer less precision with limited data. In comparison, the advanced technique requires less manpower and provides more precise output with a wide variety of data sets. In this experiment, the aerial photogrammetry technique is used: a UAV flies over an area, captures geocoded images, and a three-dimensional model (3-D model) is built. The UAV operates on a user-specified path or area with various parameters: flight altitude, ground sampling distance (GSD), image overlap, camera angle, etc. For ground control, a network of points on the ground is observed as ground control points (GCPs) using a Differential Global Positioning System (DGPS) in PPK or RTK mode. The raw data collected by the UAV and DGPS are then processed in various digital image processing programs and computer-aided design software, from which we obtain a dense point cloud, a digital elevation model (DEM), and an orthophoto. The imagery is converted into geospatial data by digitizing over the orthophoto; the DEM is further converted into a digital terrain model (DTM) for contour generation or a digital surface. As a result, we obtain a digital map of the surveyed area. In conclusion, we compared the processed data with exact measurements taken on site. The error is accepted if it does not breach the survey accuracy limits set by the concerned institutions.
Keywords: photogrammetry, post-processing kinematics, real-time kinematics, manual data inquiry
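One of the flight parameters mentioned, ground sampling distance (GSD), follows from simple camera geometry. A sketch using the standard nadir-camera formula; the sensor figures in the example are illustrative, not from the paper:

```python
def ground_sampling_distance(sensor_width_mm, image_width_px,
                             focal_length_mm, flight_height_m):
    """Ground sampling distance (cm per pixel) for a nadir-looking camera:
    GSD = (sensor width * flight height) / (focal length * image width).
    The factor 100 converts the mm/m mix into centimetres."""
    return (sensor_width_mm * flight_height_m * 100.0) / (
        focal_length_mm * image_width_px)
```

For example, a hypothetical 13.2 mm-wide sensor with 5472 px across, an 8.8 mm lens, flown at 100 m, gives roughly a 2.7 cm/px GSD.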
Procedia PDF Downloads 29
29891 Review Paper on an Algorithm Enhancing Privacy and Security in Online Meeting Platforms Using a Secured Encryption
Authors: Tonderai Muchenje, Mkhatshwa Phethile
Abstract:
Communication with one another is a basic human need, and there are many ways to achieve it. During unexpected natural disasters and outbreaks of epidemics and pandemics, online meeting platforms become most important; developments in the telecommunication sector have also played an important role here. The COVID-19 pandemic and the new-normal situation thus resulted in an overwhelming production of online meeting platforms. This software was initially used mainly for business communications, but the pandemic rapidly changed the situation: at present, these virtual meeting applications are used not only for informal meetings with friends and relatives but also for formal meetings in the business and education (university) sectors. In this article, an attempt has been made to list useful, secure ways of using online meeting platforms.
Keywords: virtual background, Zoom, secure online algorithm, RingCentral, Pexip, TeamViewer, Microsoft Teams
Procedia PDF Downloads 113
29890 Model of Multi-Criteria Evaluation for Railway Lines
Authors: Juraj Camaj, Martin Kendra, Jaroslav Masek
Abstract:
The paper focuses on the evaluation of railway tracks in Slovakia using a multi-criteria method. The evaluation of railway tracks has important implications for assessing investment in technical equipment and for the allocation of marshalling yards, which serve in the transport model as operation centers for their assigned catchment areas. This model is one of the effective ways to meet the development strategy of the European Community's railways. By applying the model in practice, a transport company can guarantee a higher quality of service and can then expect an increase in performance. The model is also applicable to other rail networks, and it supplements the theoretical train formation problem with new ways of evaluating the factors affecting the organization of wagon flows.
Keywords: railway track, multi-criteria methods, evaluation, transportation model
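The abstract does not specify which multi-criteria method is used. As a generic illustration only, a weighted-sum evaluation of tracks over normalized criteria (names and numbers invented) could be sketched as:

```python
def weighted_scores(alternatives, weights):
    """Weighted-sum multi-criteria score.  alternatives maps a name to a
    list of criterion values already normalised to [0, 1] (higher = better);
    weights is a list of criterion weights summing to 1."""
    return {name: sum(w * v for w, v in zip(weights, vals))
            for name, vals in alternatives.items()}

# Hypothetical criteria: capacity utilisation, condition, traffic potential
tracks = {'track_A': [0.9, 0.4, 0.7],
          'track_B': [0.5, 0.8, 0.6]}
scores = weighted_scores(tracks, [0.5, 0.3, 0.2])
```

Ranking the resulting scores then orders the candidate lines (or marshalling-yard locations) for investment.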
Procedia PDF Downloads 467
29889 Numerical Modeling of Waves and Currents by Using a Hydro-Sedimentary Model
Authors: Mustapha Kamel Mihoubi, Hocine Dahmani
Abstract:
Over recent years, much progress has been achieved in numerically modeling shoreline processes: waves, currents, and their combination. However, problems remain in the existing models in linking, on the one hand, the hydrodynamics of waves and currents and, on the other, the sediment transport processes, owing to the variability in time and space, the interaction, and the simultaneous action of waves and currents near the shore. This paper establishes a numerical model to forecast sediment transport for development scenarios of a harbor structure. It is based on a numerical simulation with a coupled water-sediment 2D model using the calculation codes of the MIKE 21 (DHI) software. The aim is to examine the effect of the sediment transport drivers, following the dominant incident wave direction toward the harbor entrance, under different planning variants, in order to find the technical and economic limits of sediment transport and the optimum solution for protecting the harbor structure.
Keywords: swell, current, radiation stress, mesh, MIKE 21, sediment
Procedia PDF Downloads 467
29888 Research on Coordination Strategies for Coordinating Supply Chain Based on Auction Mechanisms
Authors: Changtong Wang, Lingyun Wei
Abstract:
The combination of auctions and supply chains is of great significance for improving supply chain management systems and enhancing the efficiency of economic and social operations. To address the gap in research on supply chain strategies under auction mechanisms, a model is developed for the 1-N auction in a complete-information environment, and it is concluded that a two-part contract auction for retailers in this model can achieve supply chain coordination. The model is validated by applying it to the scenario of a flower auction in the fresh-cut flower industry, with numerical examples further proving the validity of the conclusions.
Keywords: auction mechanism, supply chain coordination strategy, fresh-cut flower industry, supply chain management
Procedia PDF Downloads 120
29887 The Promotion Effects for a Supply Chain System with a Dominant Retailer
Authors: Tai-Yue Wang, Yi-Ho Chen
Abstract:
In this study, we investigate a two-echelon supply chain with two suppliers and three retailers, among which one retailer dominates the others. A price-competition demand function is used to model this dominant, market-leading retailer. Promotion strategies and negotiation schemes are integrated to form decision-making models under different scenarios, which are then formulated as mathematical programming models. Decision variables such as promotional costs, retailer prices, wholesale price, and order quantity are included in these models. The distributions of promotion costs under different cost-allocation strategies are then discussed. Finally, an empirical example is used to validate our models. The results of this example show that the profit model creates the largest profit for the supply chain, though with different profit-sharing outcomes; at the same time, in the utility model, the more risk a member can take, the more profit is distributed to that member.
Keywords: supply chain, price promotion, mathematical models, dominant retailer
Procedia PDF Downloads 399
29886 Adaptive Thermal Comfort Model for Air-Conditioned Lecture Halls in Malaysia
Authors: B. T. Chew, S. N. Kazi, A. Amiri
Abstract:
This paper presents a study of an adaptive thermal comfort model in the tropical country of Malaysia. A number of researchers have been interested in applying the adaptive thermal comfort model to different climates throughout the world, but so far no study has been performed in Malaysia. To obtain a thermal comfort model better suited to hot and humid climates, an adaptive thermal comfort model was developed as part of this research using the results of a large field study in six lecture halls with 178 students. The relationship between operative temperature and behavioral adaptations was determined. In the developed adaptive model, the acceptable indoor neutral temperatures lay within the range of 23.9-26.0 °C, with outdoor temperatures ranging between 27.0-34.6 °C. The most comfortable temperature for students in the lecture halls was 25.7 °C.
Keywords: hot and humid, lecture halls, neutral temperature, adaptive thermal comfort model
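Adaptive comfort models of this kind typically regress the indoor neutral temperature on the outdoor temperature. A minimal least-squares sketch, fitted here to synthetic data (not the study's measurements):

```python
def fit_comfort_line(outdoor, neutral):
    """Ordinary least-squares fit of neutral (comfort) temperature against
    outdoor temperature: T_neutral = a + b * T_outdoor."""
    n = len(outdoor)
    mx = sum(outdoor) / n
    my = sum(neutral) / n
    b = sum((x - mx) * (y - my) for x, y in zip(outdoor, neutral)) / \
        sum((x - mx) ** 2 for x in outdoor)
    a = my - b * mx
    return a, b

# Synthetic check: data generated from a known line is recovered exactly
outdoor = [27.0, 29.0, 31.0, 33.0]
neutral = [0.3 * x + 16.0 for x in outdoor]
a, b = fit_comfort_line(outdoor, neutral)
```

With real survey data the fitted line `a + b * T_outdoor` would give the adaptive neutral temperature for any outdoor condition in the studied range.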
Procedia PDF Downloads 366
29885 The Systems Theoretic Accident Model and Process (STAMP) as the New Trend to Promote Safety Culture in Construction
Authors: Natalia Ortega
Abstract:
Safety culture (SCU) involves various perceptual, psychological, behavioral, and managerial factors. It has been shown that creating and maintaining an SCU is one way to reduce and prevent accidents and fatalities. In the construction sector, safety attitude, knowledge, and a supportive environment are predictors of safety behavior, and the highest possible proportion of safety behavior among employees can be achieved by improving their safety attitude and knowledge. Top management's commitment to safety is vital in shaping employees' safety attitude; therefore, the first step toward improving employees' safety attitude is the genuine commitment of top management to safety. One of the factors affecting the successful implementation of health and safety promotion programs is the construction industry's subcontracting model: the complexity of the contractual model, combined with the need for coordination among diverse stakeholders, makes it challenging to implement, manage, and follow up on health and well-being initiatives. The Systems-Theoretic Accident Model and Process (STAMP) concept has attracted growing global attention and research interest in recent years. STAMP focuses attention on the role of constraints in safety management. The findings show growth of the research field since Leveson's definition in 2004, and the model is being used across multiple domains. A systematic literature review of this novel model, originally developed to meet the safety goals of human space exploration, reveals a powerful and different approach to safety management, safety-driven design, and decision-making. Around two hundred studies have been published on applying the model. However, every safety model requires time to be transformed into research and practice, to be tested and debated, and to grow and mature.
Keywords: STAMP, risk management, accident prevention, safety culture, systems thinking, construction industry, safety
Procedia PDF Downloads 78
29884 The Usage of Bridge Estimator for HEGY Seasonal Unit Root Tests
Authors: Huseyin Guler, Cigdem Kosar
Abstract:
The aim of this study is to propose the Bridge estimator for seasonal unit root tests. Seasonality is an important factor in many economic time series: some variables contain seasonal patterns, and forecasts that ignore important seasonal patterns have a high variance. Therefore, it is very important to eliminate seasonality in seasonal macroeconomic data. There are several methods to eliminate the impacts of seasonality in time series. One of them is filtering the data; however, this method leads to undesired consequences in unit root tests, especially if the data are generated by a stochastic seasonal process. Another method is using seasonal dummy variables. Some seasonal patterns result from stationary seasonal processes, which are modeled using seasonal dummies; but if there is a varying, changing seasonal pattern over time, so that the seasonal process is non-stationary, deterministic seasonal dummies are inadequate to capture it, and it is not suitable to use them for modeling such seasonally non-stationary series. Instead, it is necessary to take seasonal differences if there are seasonal unit roots in the series. Different methods are proposed in the literature to test for seasonal unit roots, such as the Dickey-Hasza-Fuller (DHF) and Hylleberg-Engle-Granger-Yoo (HEGY) tests. The HEGY test can also be used to test for seasonal unit roots at different frequencies (monthly, quarterly, and semiannual). Another issue in unit root tests is lag selection: lagged dependent variables are added to the model in seasonal unit root tests, as in ordinary unit root tests, to overcome the autocorrelation problem. In this case, it is necessary first to choose the lag length and determine any deterministic components (i.e., a constant and trend), and then to use the proper model to test for seasonal unit roots. However, this two-step procedure might lead to size distortions and a lack of power in seasonal unit root tests.
Recent studies show that Bridge estimators are good at selecting the optimal lag length while differentiating non-stationary from stationary models for non-seasonal data. The advantage of this estimator is the elimination of the two-step nature of conventional unit root tests, which leads to a gain in size and power. In this paper, the Bridge estimator is proposed to test for seasonal unit roots in a HEGY model. A Monte Carlo experiment is performed to determine the efficiency of this approach and to compare the size and power of this method with the HEGY test. Since the Bridge estimator performs well in model selection, our approach may lead to some gain in size and power over the HEGY test.
Keywords: bridge estimators, HEGY test, model selection, seasonal unit root
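The seasonal differencing the abstract refers to can be demonstrated directly: for a series generated by a seasonal random walk (a seasonal unit root at period s = 4), the seasonal difference recovers exactly the innovation sequence. A sketch with simulated quarterly data:

```python
import random

def seasonal_difference(y, s):
    """Seasonal difference y_t - y_{t-s}; removes a seasonal unit root."""
    return [y[t] - y[t - s] for t in range(s, len(y))]

# Simulate a quarterly seasonal random walk: y_t = y_{t-4} + e_t
rng = random.Random(0)
e = [rng.gauss(0, 1) for _ in range(104)]
y = list(e[:4])                      # four starting values
for t in range(4, 104):
    y.append(y[t - 4] + e[t])

d = seasonal_difference(y, 4)        # recovers e_4, ..., e_103 exactly
```

By construction the differenced series equals the white-noise innovations, which is the stationarity that seasonal dummies alone cannot deliver for such a process.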
Procedia PDF Downloads 339
29883 Business Continuity Risk Review for a Large Petrochemical Complex
Authors: Michel A. Thomet
Abstract:
A discrete-event simulation model was used to perform a Reliability-Availability-Maintainability (RAM) study of a large petrochemical complex that included sixteen process units and seven feed and intermediate streams. All the feed and intermediate streams have associated storage tanks, so that if a processing unit fails and shuts down, the downstream units can keep producing their outputs. This also helps the upstream units, which do not have to reduce their outputs but can store their excess production until the failed unit restarts. Each process unit and each pipe section carrying the feed and intermediate streams has a probability of failure with an associated distribution and a Mean Time Between Failures (MTBF), as well as a distribution of the time to restore and a Mean Time To Restore (MTTR). The utilities supporting the process units can also fail and have their own distributions with specific MTBF and MTTR. The model runs cover ten years or more, and the runs are repeated several times to obtain statistically relevant results. One of the main results is the On-Stream Factor (OSF) of each process unit (the percentage of hours in a year when the unit is running at nominal conditions). One of the objectives of the study was to investigate whether the storage capacity for each of the feed and intermediate streams was adequate. This was done by increasing the storage capacities in several steps and running the simulation to see whether, and by how much, the OSFs improved. Other objectives were to see whether the failure of the utilities was an important factor in the overall OSF, and what could be done to reduce their failure rates through redundant equipment.
Keywords: business continuity, on-stream factor, petrochemical, RAM study, simulation, MTBF
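The RAM logic described, failure/repair cycles yielding an On-Stream Factor, can be sketched for a single unit as a tiny Monte Carlo simulation. The exponential distributions and the MTBF/MTTR values below are illustrative assumptions, not from the study:

```python
import random

def on_stream_factor(mtbf, mttr, horizon_hours, seed=0):
    """Monte-Carlo estimate of the On-Stream Factor of a single unit with
    exponentially distributed times to failure (mean mtbf) and times to
    restore (mean mttr), simulated over a fixed horizon."""
    rng = random.Random(seed)
    t = up = 0.0
    while t < horizon_hours:
        run = rng.expovariate(1.0 / mtbf)    # uptime until next failure
        up += min(run, horizon_hours - t)
        t += run
        if t >= horizon_hours:
            break
        t += rng.expovariate(1.0 / mttr)     # downtime while restoring
    return up / horizon_hours
```

For MTBF = 1000 h and MTTR = 50 h over a ten-year horizon, the estimate hovers near the steady-state value MTBF / (MTBF + MTTR) ≈ 0.95; a full RAM study extends this logic to many units, tanks, and utilities.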
Procedia PDF Downloads 217
29882 Method of Parameter Calibration for Error Term in Stochastic User Equilibrium Traffic Assignment Model
Authors: Xiang Zhang, David Rey, S. Travis Waller
Abstract:
The Stochastic User Equilibrium (SUE) model is a widely used traffic assignment model in transportation planning and is regarded as more advanced than the Deterministic User Equilibrium (DUE) model. However, the performance of the SUE model depends on its error term parameter. The objective of this paper is to propose a systematic method for determining an appropriate error term parameter value for the SUE model. First, the significance of the parameter is explored through a numerical example. Second, the parameter calibration method is developed based on the logit-based route choice model. The calibration process is realized through multiple nonlinear regression, using sequential quadratic programming combined with the least squares method. Finally, a case analysis is conducted to demonstrate the application of the calibration process and to validate the better performance of the SUE model calibrated by the proposed method compared with SUE models under other parameter values and the DUE model.
Keywords: parameter calibration, sequential quadratic programming, stochastic user equilibrium, traffic assignment, transportation planning
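The error term parameter's role can be illustrated with the logit route-choice model underlying SUE: the dispersion parameter θ controls how strongly flow concentrates on cheaper routes. A minimal sketch with invented route costs:

```python
import math

def logit_route_probabilities(costs, theta):
    """Logit route-choice probabilities used in SUE assignment:
    P_i = exp(-theta * c_i) / sum_j exp(-theta * c_j),
    where theta is the error-term (dispersion) parameter."""
    w = [math.exp(-theta * c) for c in costs]
    s = sum(w)
    return [x / s for x in w]
```

With costs of 10 and 12 minutes, a small θ spreads flow almost evenly (stochastic behaviour), while a large θ sends nearly everything to the cheaper route, approaching the deterministic (DUE) limit. This sensitivity is why calibrating θ matters.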
Procedia PDF Downloads 298
29881 A Novel PWM/PFM Controller for PSR Fly-Back Converter Using a New Peak Sensing Technique
Authors: Sanguk Nam, Van Ha Nguyen, Hanjung Song
Abstract:
For low-power applications such as adapters for portable devices and USB chargers, the primary-side regulation (PSR) fly-back converter is widely used in lieu of the conventional fly-back converter with an opto-coupler because of its simpler structure and lower cost. In the literature, there have been studies focusing on the design of the PSR circuit; however, the conventional sensing method in PSR circuits, based on an RC delay, has lower accuracy than the conventional fly-back converter using an opto-coupler. In this paper, we propose a novel PWM/PFM controller using a new sensing technique for the PSR fly-back converter, which can regulate the output voltage accurately. The conventional PSR circuit senses the output voltage information from the auxiliary winding to regulate the duty cycle of the clock that controls the output voltage. The sensing signal waveform has two transition points, where the voltage equals Vout+VD and Vout, respectively. In order to sense the output voltage, the PSR circuit must detect the time at which the current of the output diode reaches zero. In the conventional PSR fly-back converter, the sensing signal at this time has a shallow negative slope, which makes it difficult to detect the output voltage information: any delay in the sensing signal or switching clock may lead to unstable operation of the PSR fly-back converter. In this paper, instead of detecting the output voltage on a shallow negative slope, a sharp positive slope is used to sense the correct output voltage information. The proposed PSR circuit consists of a saw-tooth generator, a summing circuit, a sample-and-hold circuit, and a peak detector. There is also a start-up circuit, which protects the chip from high surge currents when the converter is turned on. Additionally, to reduce standby power loss, a second, low-frequency operating mode is designed alongside the main high-frequency mode.
In general, the operation of the proposed PSR circuit can be summarized as follows. When the output information is sensed from the auxiliary winding, a saw-tooth signal is generated by the saw-tooth generator, and both signals are summed in the summing circuit. After this process, the slope of the sensing signal at the instant the diode current is zero becomes positive and sharp, making the peak easy to detect. The output of the summing circuit is then fed into the peak detector and the sample-and-hold circuit; hence, the output voltage can be properly sensed. In this way, more accurate output voltage information can be sensed, and the margin is extended even if the circuit is delayed or noise is present, using only a simple circuit structure compared with conventional circuits, while the performance is sufficiently enhanced. Circuit verification was carried out using a 0.35 μm 700 V MagnaChip process. The simulation of the sensing signal shows a maximum error of 5 mV under various load and line conditions, which means the operation of the converter is stable. Compared with conventional circuits, we achieved a very small error using only analog circuits. In summary, a PWM/PFM controller using a simple and effective sensing method for the PSR fly-back converter has been presented; the circuit structure is simple compared with conventional designs, and the simulation results confirm the idea of the design.
Keywords: primary-side regulation, PSR, sensing technique, peak detector, PWM/PFM control, fly-back converter
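The peak-sharpening idea, summing a saw-tooth with the sensed waveform so that the knee at the diode-current-zero instant becomes a detectable local maximum, can be illustrated with a purely numeric toy. This is not the authors' circuit, just an argmax-level sketch with an invented waveform:

```python
def knee_index_by_peak(sensing, ramp_slope):
    """Add a rising ramp (saw-tooth segment) to a sensing waveform whose
    'knee' sits on a shallow negative slope; the sum then has a sharp
    local maximum exactly at the knee, which a simple peak detector
    (here: argmax) can find."""
    summed = [v + ramp_slope * k for k, v in enumerate(sensing)]
    return max(range(len(summed)), key=summed.__getitem__)

# Synthetic sensing signal: gentle decay (slope -0.02) up to sample 60,
# then a fast drop (slope -1.0) after the diode current reaches zero
sensing = [5.0 - 0.02 * k if k <= 60 else 5.0 - 0.02 * 60 - 1.0 * (k - 60)
           for k in range(100)]
```

With a ramp slope between the two signal slopes (here 0.1), the sum rises before the knee and falls after it, so the peak lands exactly on the knee; without the ramp the waveform is monotonically decreasing and argmax is useless, which mirrors the detection difficulty the abstract describes.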
Procedia PDF Downloads 337
29880 A Tutorial on Model Predictive Control for Spacecraft Maneuvering Problem with Theory, Experimentation and Applications
Authors: O. B. Iskender, K. V. Ling, V. Dubanchet, L. Simonini
Abstract:
This paper discusses the recent advances and future prospects of spacecraft position and attitude control using Model Predictive Control (MPC). First, the challenges of space missions are summarized, in particular, taking into account the errors, uncertainties, and constraints imposed by the mission, the spacecraft, and the onboard processing capabilities. Space mission errors and uncertainties are summarized in categories: initial condition errors, unmodeled disturbances, and sensor and actuator errors. The constraints are classified into two categories: physical and geometric constraints. Last, real-time implementation capability is discussed with regard to the required computation time and the impact of sensor and actuator errors, based on Hardware-In-The-Loop (HIL) experiments. The rationales behind the scenarios are also presented in the scope of space applications such as formation flying, attitude control, rendezvous and docking, rover steering, and precision landing. The objectives of these missions are explained, and the generic constrained MPC problem formulations are summarized. Three key design elements of MPC are discussed: the prediction model, the constraint formulation, and the objective cost function. The prediction models can be linear time-invariant or time-varying, depending on the geometry of the orbit, whether circular or elliptic. The constraints can be given as linear inequalities for inputs or outputs, which can be written in the same form. Moreover, recent convexification techniques for non-convex geometric constraints (e.g., plume impingement, Field-of-View (FOV)) are presented in detail. Next, different objectives are provided in a mathematical framework and explained accordingly. Third, because MPC implementation relies on solving constrained optimization problems in real time, computational aspects are also examined.
In particular, high-speed implementation capabilities and HIL challenges are presented for representative space avionics. This covers an analysis of future space processors as well as the requirements on sensors and actuators arising from the HIL experiment outputs. The HIL tests are investigated for kinematic and dynamic tests, where robotic arms and floating robots are used, respectively. Finally, the proposed algorithms and experimental setups are introduced and compared with the authors' previous work and future plans. The paper concludes with the conjecture that the MPC paradigm is a promising framework at the crossroads of space applications, which could be further advanced based on the challenges mentioned throughout the paper and the unaddressed gaps.
Keywords: convex optimization, model predictive control, rendezvous and docking, spacecraft autonomy
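Because MPC solves a constrained finite-horizon optimization at each step and applies only the first input, the receding-horizon loop can be illustrated on a toy scalar system. The brute-force enumeration below is for illustration only; real MPC implementations use QP solvers:

```python
import itertools

def mpc_step(x, horizon=3, candidates=(-1.0, -0.5, 0.0, 0.5, 1.0),
             q=1.0, r=0.1):
    """One receding-horizon step for the scalar system x+ = x + u with the
    input constraint |u| <= 1: enumerate all candidate input sequences over
    the horizon, pick the one minimising sum(q*x^2 + r*u^2), and return
    only its first move (the receding-horizon principle)."""
    best_u, best_cost = 0.0, float('inf')
    for seq in itertools.product(candidates, repeat=horizon):
        xs, cost = x, 0.0
        for u in seq:
            xs = xs + u                  # predict one step ahead
            cost += q * xs * xs + r * u * u
        if cost < best_cost:
            best_cost, best_u = cost, seq[0]
    return best_u

# Closed loop: re-optimise at every step, apply the first input only
x = 2.0
for _ in range(6):
    x = x + mpc_step(x)
```

The state is driven to the origin while every applied input respects the constraint set, which is the essential behaviour the constrained formulations in the tutorial generalise to 6-DOF spacecraft dynamics.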
Procedia PDF Downloads 110
29879 Model-Based Control for Piezoelectric-Actuated Systems Using Inverse Prandtl-Ishlinskii Model and Particle Swarm Optimization
Authors: Jin-Wei Liang, Hung-Yi Chen, Lung Lin
Abstract:
In this paper, a feedforward controller is designed to eliminate the nonlinear hysteresis behavior of a system driven by a piezoelectric stack actuator (PSA). The control design is based on an inverse Prandtl-Ishlinskii (P-I) hysteresis model identified using the particle swarm optimization (PSO) technique. From the identified P-I model, both the inverse P-I hysteresis model and the feedforward controller can be determined. Experimental results obtained using the inverse P-I feedforward control are compared with their counterparts using hysteresis estimates obtained from an identified Bouc-Wen model, and the effectiveness of the proposed feedforward control scheme is demonstrated. To further improve control performance, feedback compensation using a traditional PID scheme is integrated with the feedforward controller.
Keywords: Bouc-Wen hysteresis model, particle swarm optimization, Prandtl-Ishlinskii model, automation engineering
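The Prandtl-Ishlinskii model referenced here is a weighted superposition of backlash ("play") operators. A minimal sketch of the operator and the superposition, with invented thresholds and weights (the paper identifies these via PSO):

```python
def play_operator(u_seq, r, y0=0.0):
    """Backlash ('play') operator with threshold r -- the building block
    of the Prandtl-Ishlinskii hysteresis model.  The output follows the
    input only once the input has moved more than r away from it."""
    y = y0
    out = []
    for u in u_seq:
        y = max(u - r, min(u + r, y))
        out.append(y)
    return out

def prandtl_ishlinskii(u_seq, thresholds, weights):
    """P-I model: weighted superposition of play operators."""
    plays = [play_operator(u_seq, r) for r in thresholds]
    return [sum(w * p[k] for w, p in zip(weights, plays))
            for k in range(len(u_seq))]
```

Running a triangle input up and down through the model already produces the loading/unloading asymmetry (hysteresis) that the feedforward inverse must cancel.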
Procedia PDF Downloads 513
29878 The Morphological Changes of POV in Diabetic Patients and Its Correlation with Changes in Corneal Epithelium, Corneal Nerve, and the Fundus in Using Vivo Confocal Microscopy
Authors: Ji Jiazheng, Wang Jingrao, Jin Xin, Zhang Hong
Abstract:
Diabetes mellitus is a metabolic disease characterized by high blood sugar. A long-standing hyperglycemic state can lead to various tissue damage. Diabetic retinopathy is the most common and widely studied ocular complication and has become the leading cause of blindness in my country. At the same time, diabetes has profound clinically relevant effects on the cornea, leading to keratopathy and vision-threatening. The cornea is an avascular tissue and is sensitive to hyperglycemia, Keratopathy caused by diabetes is usually chronic, they are called diabetic keratopathy or diabetic neurotrophic keratopathy, leading to several diabetic corneal complications including delayed epithelial wound healing, recurrent erosions, neuropathy, loss of sensitivity. Corneal stem cell dysfunction in diabetic patients as an important influencing factor of diabetic keratopathy. The consequences of this condition are often underestimated. The limbus is located between the cornea and the sclera tissue. The limbal stroma consists of a series of radial elevations with fibrovascular centers known as palisades of Vogt (POV). Previous studies have shown that palisades of Vogt (POV), as the main site of limbal stem cells, plays an important role in the homeostasis of the corneal epithelium. Therefore, POV plays a vital role in the healing of corneal epithelial surgery and postoperative evaluation. IVCM can observe the condition of the corneal epithelium at the cellular level. It has profound significance and guidance for the evaluation of limbal and limbal stem cells. We have previously observed structural changes in POV in HSK and HZO patients on IVCM. At present, there have been reports involving limbal stem cell dysfunction in diabetic patients, but the specific pathogenesis is still unclear. However, there are no studies on POV morphological changes in patients with DM. 
Therefore, using IVCM, we compared the correlations between POV morphological changes and corneal epithelial basal cell density, corneal nerves, and disease duration in DM patients and healthy controls. Fundus examination was also used to study the correlations between retinal nerve fiber layer (RNFL) thickness, ganglion cell complex (GCC) thickness, and the POV in diabetic patients, as well as the correlations between SVD, DVD, and the POV.
Keywords: confocal microscopy, fundus, limbal stem cells, diabetes
Procedia PDF Downloads 81
29877 Yang-Lee Edge Singularity of the Infinite-Range Ising Model
Authors: Seung-Yeon Kim
Abstract:
The Ising model, consisting of magnetic spins, is the simplest system showing phase transitions and critical phenomena at finite temperature, and it has played a central role in our understanding of these phenomena. The Ising model also describes the gas-liquid phase transition accurately. However, the Ising model in a nonzero magnetic field has remained one of the most intriguing and outstanding unsolved problems. We study analytically the partition function zeros in the complex magnetic-field plane and the Yang-Lee edge singularity of the infinite-range Ising model in an external magnetic field. In addition, we compare the Yang-Lee edge singularity of the infinite-range Ising model with that of the square-lattice Ising model in an external magnetic field.
Keywords: Ising ferromagnet, magnetic field, partition function zeros, Yang-Lee edge singularity
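For the infinite-range (Curie-Weiss) model, the partition function zeros in the complex magnetic-field plane are easy to compute numerically. A minimal sketch, not taken from the abstract: writing the fugacity u = exp(2βh), z^N·Z becomes a degree-N polynomial in u whose roots are the Lee-Yang zeros; since all pair couplings J/N are positive (ferromagnetic), the Lee-Yang circle theorem places every root on |u| = 1.

```python
import numpy as np
from math import comb, exp

def lee_yang_zeros(N=20, betaJ=1.0):
    # Curie-Weiss Ising model: with k down spins, the magnetization is
    # M = N - 2k and the sector weight is C(N, k) * exp(betaJ*M^2/(2N)) * z^M,
    # z = exp(beta*h). With u = z^2, z^N * Z = sum_k c_k * u^(N - k),
    # a polynomial in u whose roots are the Lee-Yang zeros.
    coeffs = [comb(N, k) * exp(betaJ * (N - 2 * k) ** 2 / (2 * N))
              for k in range(N + 1)]  # highest degree (k = 0) first, as np.roots expects
    return np.roots(coeffs)

zeros = lee_yang_zeros()
# Lee-Yang circle theorem: for ferromagnetic coupling, all zeros lie on |u| = 1
print(np.max(np.abs(np.abs(zeros) - 1.0)))
```

The deviation printed is at the level of floating-point noise, confirming the circle theorem numerically for this finite-N mean-field model.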
Procedia PDF Downloads 736
29876 3D CFD Model of Hydrodynamics in Lowland Dam Reservoir in Poland
Authors: Aleksandra Zieminska-Stolarska, Ireneusz Zbicinski
Abstract:
Introduction: The objective of the present work was to develop and validate a 3D CFD numerical model for simulating flow through a 17-kilometre-long dam reservoir of complex bathymetry. In contrast to flowing waters, dam reservoirs were not emphasized in the early years of water quality modeling, as the issue was never a major focus of urban development. Starting in the 1970s, however, it was recognized that natural and man-made lakes are equally, if not more, important than estuaries and rivers from a recreational standpoint. The Sulejow Reservoir (Central Poland) was selected as the study area because it is representative of many lowland dam reservoirs and because a large database of the ecological, hydrological and morphological parameters of the lake is available. Method: 3D two-phase and one-phase CFD models were analysed to determine the hydrodynamics of the Sulejow Reservoir. Developing a 3D two-phase CFD model of the flow requires a mesh with millions of elements and raises serious convergence problems. Compared with the two-phase model, the one-phase CFD model excludes only the dynamics of surface waves from the simulation, which should not significantly change the water flow pattern in a lowland dam reservoir. In the one-phase CFD model, the phases (water and air) are separated by a plate, which allows calculation of the water phase alone. Because the wind affects the flow velocity, the plate in the one-phase model must move with the speed and direction of the upper water layer in order to account for the wind. To determine the velocity at which the plate moves on the water surface and interacts with the underlying water layers, and to apply this value in the one-phase CFD model, a 2D two-phase model was elaborated. Result: The model was verified against extensive flow measurements (StreamPro ADCP, USA).
Excellent agreement (average error below 10%) between computed and measured velocity profiles was found. The main conclusions are as follows. The flow field in the Sulejow Reservoir is transient in nature, with swirling flows in the lower part of the lake; recirculating zones up to half a kilometre in size may increase water retention time in this region. The simulations also confirm the pronounced effect of the wind on the development of circulation zones in the reservoir, which might affect the accumulation of nutrients in the epilimnion layer and result, for example, in algal blooms. Conclusion: The resulting model is accurate, and the methodology developed in this work can be applied to storage reservoirs of all configurations, characteristics, and hydrodynamic conditions. Large recirculating zones, which increase water retention time and might affect the accumulation of nutrients, were detected in the lake. An accurate CFD model of the hydrodynamics of a large water body can support forecasts of water quality, especially with respect to eutrophication, and the management of large water bodies.
Keywords: CFD, mathematical modelling, dam reservoirs, hydrodynamics
Procedia PDF Downloads 400
29875 A Comparative Study of Additive and Nonparametric Regression Estimators and Variable Selection Procedures
Authors: Adriano Z. Zambom, Preethi Ravikumar
Abstract:
One of the biggest challenges in nonparametric regression is the curse of dimensionality. Additive models are known to overcome this problem by estimating only the individual additive effect of each covariate. However, if the model is misspecified, the accuracy of the estimator relative to the fully nonparametric one is unknown. In this work, the efficiency of fully nonparametric regression estimators such as loess is compared to that of estimators that assume additivity, in several situations including both additive and non-additive regression scenarios. The comparison is made by computing the oracle mean squared error of the estimators with respect to the true nonparametric regression function. A backward elimination selection procedure based on the Akaike Information Criterion is then proposed, computed from either the additive or the nonparametric model. Simulations show that if the additive model is misspecified, the percentage of time it fails to select important variables can be higher than that of the fully nonparametric approach. A dimension reduction step is included when the nonparametric estimator cannot be computed due to the curse of dimensionality. Finally, the Boston housing dataset is analyzed using the proposed backward elimination procedure, and the selected variables are identified.
Keywords: additive model, nonparametric regression, variable selection, Akaike Information Criterion
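The backward elimination idea can be sketched in a few lines. This is an illustrative stand-in, not the authors' procedure: it fits an ordinary least squares model and uses the Gaussian AIC, n·log(RSS/n) + 2(p+1), where the paper would plug in the additive or nonparametric estimator instead; the data are synthetic.

```python
import numpy as np

def aic_ols(X, y):
    # Gaussian AIC for an OLS fit with intercept: n*log(RSS/n) + 2*(p + 1)
    n = len(y)
    Z = np.column_stack([np.ones(n), X])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    rss = float(np.sum((y - Z @ beta) ** 2))
    return n * np.log(rss / n) + 2 * Z.shape[1]

def backward_eliminate(X, y):
    keep = list(range(X.shape[1]))
    while len(keep) > 1:
        current = aic_ols(X[:, keep], y)
        # AIC of each candidate model with one covariate removed
        trials = [aic_ols(X[:, [j for j in keep if j != i]], y) for i in keep]
        if min(trials) >= current:      # no deletion improves the criterion
            break
        keep.pop(int(np.argmin(trials)))
    return sorted(keep)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = 2 * X[:, 0] - 3 * X[:, 2] + rng.normal(scale=0.5, size=200)
print(backward_eliminate(X, y))  # the informative covariates, columns 0 and 2, survive
```

Swapping `aic_ols` for an AIC computed from an additive or fully nonparametric fit gives the two variants the abstract compares.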
Procedia PDF Downloads 263
29874 Diagnostic Delays and Treatment Dilemmas: A Case of Drug-Resistant HIV and Tuberculosis
Authors: Christi Jackson, Chuka Onaga
Abstract:
Introduction: We report a case of delayed diagnosis of extra-pulmonary INH-mono-resistant tuberculosis (TB) in a South African patient with drug-resistant HIV. Case presentation: A 36-year-old male was initiated on first-line (NNRTI-based) anti-retroviral therapy (ART) in September 2009 and switched to second-line (PI-based) ART in 2011, according to local guidelines. He was followed up at the outpatient wellness unit of a public hospital, where he was diagnosed with protease inhibitor-resistant HIV in March 2016. He had an HIV viral load (HIVVL) of 737,000 copies/mL and a CD4 count of 10 cells/µL, and presented with complaints of productive cough, weight loss, chronic diarrhoea and a septic buttock wound. Several investigations were done on sputum, stool and pus samples, but all were negative for TB. The patient was treated with antibiotics, and the cough and the buttock wound improved. He was subsequently started on a third-line ART regimen of Darunavir, Ritonavir, Etravirine, Raltegravir, Tenofovir and Emtricitabine in May 2016. He continued losing weight, became too weak to stand unsupported and started complaining of abdominal pain. Further investigations were done in September 2016, including a urine specimen for line probe assay (LPA), which showed M. tuberculosis sensitive to rifampicin but resistant to INH. A lymph node biopsy also gave histological confirmation of TB. Management and outcome: He was started on Rifabutin, Pyrazinamide and Ethambutol in September 2016, and Etravirine was discontinued. After 6 months on ART and 2 months on TB treatment, his HIVVL had dropped to 286 copies/mL, his CD4 count had improved to 179 cells/µL, and he showed clinical improvement. The pharmacy supply of his individualised drugs was unreliable, which presented some challenges to continuity of treatment. He successfully completed his treatment in June 2017 while maintaining virological suppression.
Discussion: Several laboratory-related factors delayed the diagnosis of TB, including the unavailability of urine lipoarabinomannan (LAM) and urine GeneXpert (GXP) tests at this facility. Once the diagnosis was made, it presented a treatment dilemma because of the expected drug-drug interactions between his third-line ART regimen and his INH-resistant TB regimen, and specialist input was required. Conclusion: TB is more difficult to diagnose in patients with severe immunosuppression; additional tests such as urine LAM and urine GXP can therefore help expedite the diagnosis in these cases. Patients on non-standard drug regimens should always be discussed with a specialist in order to avoid potentially harmful drug-drug interactions.
Keywords: drug-resistance, HIV, line probe assay, tuberculosis
Procedia PDF Downloads 169
29873 Numerical and Experimental Analysis of Temperature Distribution and Electric Field in a Natural Rubber Glove during Microwave Heating
Authors: U. Narumitbowonkul, P. Keangin, P. Rattanadecho
Abstract:
Both numerical and experimental investigations of the temperature distribution and electric field in a natural rubber glove (NRG) during microwave heating are presented. A three-dimensional model of the NRG and microwave oven is considered in this work. The influences of position, heating time and rotation angle of the NRG on the temperature distribution and electric field are presented in detail. The coupled equations of electromagnetic wave propagation and heat transfer are solved using the finite element method (FEM). The numerical model is validated against an experimental study at a frequency of 2.45 GHz, and the numerical results closely match the experimental results. Furthermore, it is found that the temperature and electric field increase with increasing heating time. The hot-spot zone appears at the tip of the middle finger of the NRG, while the maximum temperature occurs at a rotation angle of 60 degrees. This investigation provides the essential aspects for a fundamental understanding of heat transport in NRGs heated with microwave energy in industry.
Keywords: electric field, finite element method, microwave energy, natural rubber glove
Procedia PDF Downloads 262
29872 Confidence Envelopes for Parametric Model Selection Inference and Post-Model Selection Inference
Authors: I. M. L. Nadeesha Jayaweera, Adao Alex Trindade
Abstract:
In choosing a candidate model in likelihood-based modeling via an information criterion, the practitioner is often faced with the difficult task of deciding just how far up the ranked list to look. Motivated by this pragmatic necessity, we construct an uncertainty band for a generalized (model selection) information criterion (GIC), defined as a criterion whose limit in probability is identical to that of the normalized log-likelihood; this includes common special cases such as AIC and BIC. The method starts from the asymptotic normality of the GIC for the joint distribution of the candidate models in an independent and identically distributed (IID) data framework and proceeds by deriving the (asymptotically) exact distribution of the minimum. The calculation of an upper quantile for its distribution then involves the computation of multivariate Gaussian integrals, which is amenable to efficient implementation via the R package "mvtnorm". The performance of the methodology is tested on simulated data by checking the coverage probability of nominal upper quantiles and is compared to the bootstrap. Both methods give coverage close to nominal for large samples, but the bootstrap is two orders of magnitude slower. The methodology is subsequently extended to two other commonly used model structures: regression and time series. In the regression case, we derive the corresponding asymptotically exact distribution of the minimum GIC by invoking Lindeberg-Feller-type conditions for triangular arrays, and are thus similarly able to calculate upper quantiles for its distribution via multivariate Gaussian integration. The bootstrap once again provides a default competing procedure, and we find that similar comparative performance holds as in the IID case. The time series case is complicated by a far more intricate asymptotic regime for the joint distribution of the model GIC statistics.
Under a Gaussian likelihood, the default in most packages, one needs to derive the limiting distribution of a normalized quadratic form for a realization from a stationary series. Under conditions on the process satisfied by ARMA models, a multivariate normal limit is once again achieved. The bootstrap can, however, be employed for its computation, whence we are once again in the multivariate Gaussian integration paradigm for upper-quantile evaluation. Comparisons of this bootstrap-aided semi-exact method with the full-blown bootstrap once again reveal similar performance but faster computation. One of the most difficult problems in contemporary statistical methodological research is accounting for the extra variability introduced by model selection uncertainty, so-called post-model selection inference (PMSI). We explore ways in which the GIC uncertainty band can be inverted to make inferences on the parameters. In the IID case, this is attempted by pivoting the CDF of the asymptotically exact distribution of the minimum GIC. For inference on one parameter at a time and a small number of candidate models, this works well, and the attained PMSI confidence intervals are wider than the MLE-based Wald intervals, as expected.
Keywords: model selection inference, generalized information criteria, post-model selection inference, asymptotic theory
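The upper-quantile step described above reduces to multivariate Gaussian integration. A hedged sketch in Python (the authors use the R package "mvtnorm"; SciPy's multivariate normal CDF plays the same role here), with an assumed limiting mean vector and covariance for the candidate GIC statistics:

```python
import numpy as np
from scipy.stats import multivariate_normal
from scipy.optimize import brentq

def min_upper_quantile(mu, cov, level=0.95):
    # CDF of the minimum of a jointly Gaussian vector X ~ N(mu, cov):
    #   P(min_i X_i <= q) = 1 - P(X_i > q for all i)
    #                     = 1 - P(-X <= -q componentwise),  with -X ~ N(-mu, cov)
    mu = np.asarray(mu, dtype=float)
    neg = multivariate_normal(mean=-mu, cov=cov)
    cdf_min = lambda q: 1.0 - neg.cdf(np.full(len(mu), -q))
    # invert the CDF numerically for the requested upper quantile
    s = np.sqrt(np.max(np.diag(np.atleast_2d(cov))))
    return brentq(lambda q: cdf_min(q) - level, mu.min() - 10 * s, mu.max() + 10 * s)

# sanity check with one standard normal candidate: the 95% point is ~1.645
print(round(min_upper_quantile([0.0], [[1.0]]), 3))
```

In practice `mu` and `cov` would be the estimated limiting mean and covariance of the GIC statistics across the candidate models; the returned quantile defines the upper edge of the uncertainty band.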
Procedia PDF Downloads 86
29871 A Geosynchronous Orbit Synthetic Aperture Radar Simulator for Moving Ship Targets
Authors: Linjie Zhang, Baifen Ren, Xi Zhang, Genwang Liu
Abstract:
Ship detection is of great significance for both military and civilian applications. Synthetic aperture radar (SAR), with its all-day, all-weather, ultra-long-range characteristics, has been used widely. In view of the low temporal resolution of low-orbit SAR and the need for SAR data of high temporal resolution, geosynchronous orbit (GEO) SAR is attracting more and more attention. Since GEO SAR has a short revisit period and a large coverage area, it is expected to be well suited to monitoring marine ship targets. However, the height of the orbit increases the integration time by almost two orders of magnitude, so the utility and efficacy of GEO SAR for moving marine vessels are still uncertain. This paper examines the feasibility of GEO SAR by presenting a GEO SAR simulator for moving ships. The simulator is a geometry-based radar imaging simulator that focuses on geometric quality rather than radiometric fidelity. Its inputs are a 3D ship model (.obj format, produced by most 3D design software, such as 3D Max), the ship's velocity, and the parameters of the satellite orbit and SAR platform. Its outputs are simulated GEO SAR raw signal data and a SAR image. The simulation proceeds in four steps. (1) Read the 3D model, including the ship rotation (pitch, yaw, and roll) and velocity (speed and direction) parameters, and extract the primitives (triangles) visible from the SAR platform. (2) Compute the radar scattering from the ship with the physical optics (PO) method. In this step, the vessel is sliced into many small rectangular primitives along the azimuth, and the radiometric calculation for each primitive is carried out separately. Since the simulator focuses on the complex structure of ships, only single-bounce and double-bounce reflections are considered. (3) Generate the raw data with GEO SAR signal modeling.
Since the usual 'stop and go' model is not valid for GEO SAR, the range model must be reconsidered. (4) Finally, generate the GEO SAR image with an improved range-Doppler method. Numerical simulations of a fishing boat and a cargo ship are given, and GEO SAR images for different ship attitudes, velocities, satellite orbits, and SAR platforms are simulated. By analyzing these simulated results, the effectiveness of GEO SAR for the detection of moving marine vessels is evaluated.
Keywords: GEO SAR, radar, simulation, ship
Procedia PDF Downloads 175
29870 Decision Support System for Hospital Selection in Emergency Medical Services: A Discrete Event Simulation Approach
Authors: D. Tedesco, G. Feletti, P. Trucco
Abstract:
The present study aims to develop a Decision Support System (DSS) to support the operational decision of an Emergency Medical Service (EMS) regarding the assignment of medical emergency requests to Emergency Departments (EDs). In the literature, this problem is known as "hospital selection" and concerns the definition of policies for selecting the ED to which patients who require further treatment are transported by ambulance. The research methodology began with a review of the technical-scientific literature on DSSs supporting EMS management and, in particular, the hospital selection decision. The literature analysis showed that current studies focus mainly on the EMS phases related to the ambulance service and consider a process that ends when the ambulance becomes available after completing a request; all ED-related issues are excluded and treated as part of a separate process. Indeed, the most studied hospital selection policy turned out to be proximity, which minimizes transport time and releases the ambulance as quickly as possible. The purpose of the present study is to develop an optimization model for assigning medical emergency requests to EDs that considers information on the subsequent phases of the process, such as the case-mix, the expected service throughput times, and the operational capacity of the different EDs. To this end, a Discrete Event Simulation (DES) model was created to evaluate different hospital selection policies. The subsequent steps of the research consisted of developing a general simulation architecture, implementing it in the AnyLogic software, and validating it on a realistic dataset.
The hospital selection policy that produced the best results was minimization of the Time To Provider (TTP), defined as the time from the beginning of the ambulance journey to the start of the clinical evaluation by a doctor in the ED. Finally, two approaches were compared: a static approach, based on a retrospective estimate of the TTP, and a dynamic approach, based on a predictive estimate of the TTP obtained with a continuously updated Winters model. The findings reveal several benefits of minimizing TTP as a hospital selection policy. It significantly reduces service throughput times in the ED with only a minimal increase in travel time; it provides an immediate view of the saturation state of each ED; and it accounts for the case-mix present in the ED structures (the different triage codes), since different severity codes correspond to different service throughput times. Moreover, a predictive approach is more reliable for TTP estimation than a retrospective one, although it is harder to apply. These considerations can support decision-makers in introducing new hospital selection policies to enhance EMS performance.
Keywords: discrete event simulation, emergency medical services, forecast model, hospital selection
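The contrast between the proximity policy and the minimum-TTP policy can be sketched with a minimal event-driven toy model (a stand-in for the AnyLogic simulation, with invented arrival, travel and service parameters and a single provider per ED):

```python
import random

def simulate(policy, seed=42, n_requests=500):
    # Toy two-ED system: ED 0 is close but slow, ED 1 is farther but fast.
    rng = random.Random(seed)
    travel = [5.0, 15.0]      # ambulance travel times in minutes (assumed)
    service = [20.0, 10.0]    # mean provider evaluation times (assumed)
    free_at = [0.0, 0.0]      # time each ED's provider next becomes free
    total_ttp, t = 0.0, 0.0
    for _ in range(n_requests):
        t += rng.expovariate(1 / 10.0)   # a new request roughly every 10 minutes
        if policy == "nearest":          # proximity policy: always the closest ED
            ed = 0
        else:                            # "min_ttp": travel + expected ED wait
            est = [travel[i] + max(free_at[i] - (t + travel[i]), 0.0) for i in (0, 1)]
            ed = 0 if est[0] <= est[1] else 1
        arrival = t + travel[ed]
        start = max(arrival, free_at[ed])            # queue if provider is busy
        free_at[ed] = start + rng.expovariate(1 / service[ed])
        total_ttp += start - t           # TTP: dispatch -> start of evaluation
    return total_ttp / n_requests

print(simulate("nearest"), simulate("min_ttp"))
```

With these parameters the nearest-ED policy overloads the slow ED and its average TTP grows without bound, while the minimum-TTP policy balances the load, which is the qualitative effect the study reports.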
Procedia PDF Downloads 88
29869 Localized Detection of ᴅ-Serine by Using an Enzymatic Amperometric Biosensor and Scanning Electrochemical Microscopy
Authors: David Polcari, Samuel C. Perry, Loredano Pollegioni, Matthias Geissler, Janine Mauzeroll
Abstract:
ᴅ-serine acts as an endogenous co-agonist for N-methyl-ᴅ-aspartate receptors in neuronal synapses. This makes it a key component in the development and function of a healthy brain, especially given its role in several neurodegenerative diseases such as Alzheimer's disease and dementia. Despite such clear research motivations, the primary site and mechanism of ᴅ-serine release are still unclear. For this reason, we are developing a biosensor for the detection of ᴅ-serine utilizing a microelectrode in combination with a ᴅ-amino acid oxidase enzyme, which produces stoichiometric quantities of hydrogen peroxide in response to ᴅ-serine. To give the biosensor good selectivity, we use a permselective poly(meta-phenylenediamine) film, which ensures by size exclusion that only the target molecule is reacted. In this work, we investigated the effect of the electrodeposition conditions on the biosensor's response time and selectivity. Careful optimization of the fabrication process enhanced the biosensor response time, allowing real-time sensing of ᴅ-serine in bulk solution and also providing a means to map the efflux of ᴅ-serine in real time. This was done using scanning electrochemical microscopy (SECM) with the optimized biosensor to measure the localized release of ᴅ-serine from an agar-filled glass capillary sealed in an epoxy puck, which acted as a model system. The SECM area scan simultaneously provided information on the rate of ᴅ-serine flux from the model substrate and on the size of the substrate itself. This SECM methodology, which provides high spatial and temporal resolution, could be useful for investigating the primary site and mechanism of ᴅ-serine release in other biological samples.
Keywords: ᴅ-serine, enzymatic biosensor, microelectrode, scanning electrochemical microscopy
Procedia PDF Downloads 227
29868 Isolated Iterating Fractal Independently Corresponds with Light and Foundational Quantum Problems
Authors: Blair D. Macdonald
Abstract:
Nearly one hundred years after its origin, foundational quantum mechanics remains one of the greatest unexplained mysteries in physics today. In that time, chaos theory and its geometry, the fractal, have developed. In this paper, the propagation behaviour under iteration of a simple fractal, the Koch snowflake, is described and analysed. From an arbitrary observation point within the fractal set, the fractal propagates forward by oscillation (the focus of this study) and, retrospectively, behind the observer by exponential growth from a point beginning. It propagates a potentially infinite, exponentially growing, oscillating sinusoidal wave of discrete triangle bits sharing many characteristics of light and of quantum entities. The model's wave speed is potentially constant, offering insights into the perception and direction of time, where, to an observer travelling at the frontier of propagation, time may slow to a stop. In isolation, the fractal is a superposition of component bits, where position and scale present a problem of location. In reality, this problem is experienced within fractal landscapes or fields, where 'position' is only 'known' through the addition of information or markers. The quantum 'measurement problem', 'uncertainty principle', 'entanglement', and the classical-quantum interface are addressed; these are problems of scale invariance associated with isolated fractality. The dual forward and retrospective perspectives of the fractal model offer an opportunity for unification between quantum mechanics and cosmological mathematics, observations, and conjectures. Quantum and cosmological problems may be different aspects of a single fractal geometry.
Keywords: measurement problem, observer, entanglement, unification
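The exponential growth under iteration is easy to make concrete. A small sketch, not from the paper: each Koch iteration replaces every segment with four segments of one-third the length, so the segment count grows as 3·4^n while the perimeter grows as 3·(4/3)^n.

```python
def koch_counts(iterations):
    # Start from the unit-sided equilateral triangle: 3 segments, perimeter 3.
    segments, perimeter = 3, 3.0
    history = [(segments, perimeter)]
    for _ in range(iterations):
        segments *= 4            # every segment is replaced by four
        perimeter *= 4 / 3       # the four new thirds total 4/3 of the old length
        history.append((segments, perimeter))
    return history

for n, (s, p) in enumerate(koch_counts(5)):
    print(n, s, round(p, 3))
```

The segment count is the "discrete triangle bits" growth the abstract describes; the unbounded perimeter inside a bounded area is the fractal's defining feature.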
Procedia PDF Downloads 89
29867 Validation of a Fluid-Structure Interaction Model of an Aortic Dissection versus a Bench Top Model
Authors: K. Khanafer
Abstract:
The aim of this investigation was to validate a fluid-structure interaction (FSI) model of type B aortic dissection against experimental results from a bench-top model. Another objective was to study the relationship between the size of a septectomy, which increases the outflow of the false lumen, and the pressure differential between the true and false lumina. FSI analysis based on Galerkin's formulation was used to study the flow pattern and hemodynamics within a flexible type B aortic dissection model, using boundary conditions from our experimental data. The numerical results of the model were verified against the experimental data for various tear sizes and locations. CFD tools thus have a potential role in evaluating different scenarios and aortic dissection configurations.
Keywords: aortic dissection, fluid-structure interaction, in vitro model, numerical
Procedia PDF Downloads 269
29866 Application of Data Mining Techniques for Tourism Knowledge Discovery
Authors: Teklu Urgessa, Wookjae Maeng, Joong Seek Lee
Abstract:
Five implementations of three data mining classification techniques were evaluated for extracting important insights from tourism data. The aim was to find the best performing algorithm, among those compared, for tourism knowledge discovery. The knowledge discovery from data process was used as the process model, and 10-fold cross-validation was used for testing. Various data preprocessing activities were performed to obtain the final dataset for model building, and classification models for the selected algorithms were built under different scenarios on the preprocessed dataset. The best performing algorithm on the tourism dataset was Random Forest (76%) before information gain-based attribute selection, and J48 (C4.5) (75%) after selection of the attributes most relevant to the class (target) attribute. In terms of model-building time, attribute selection improved the efficiency of all algorithms; the artificial neural network (multilayer perceptron) showed the largest improvement (90%). The rules extracted from the decision tree model are presented; they reveal intricate, non-trivial knowledge and insights that would otherwise not be discovered by simple statistical analysis, despite the mediocre accuracy achieved by the classification algorithms.
Keywords: classification algorithms, data mining, knowledge discovery, tourism
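Information gain-based attribute selection, used above to rank attributes against the class attribute, can be sketched in plain Python. The attributes and records below are an invented toy example, not the tourism dataset:

```python
import math
from collections import Counter

def entropy(labels):
    # Shannon entropy H(class) of a list of class labels, in bits
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def info_gain(feature, labels):
    # information gain H(class) - H(class | feature) for a categorical attribute
    n = len(labels)
    conditional = 0.0
    for v in set(feature):
        subset = [lab for f, lab in zip(feature, labels) if f == v]
        conditional += len(subset) / n * entropy(subset)
    return entropy(labels) - conditional

# toy data: 'weekend' separates the class perfectly, 'weather' not at all
weekend = ["y", "y", "n", "n", "y", "n"]
weather = ["sun", "rain", "rain", "sun", "sun", "sun"]
visit   = ["go", "go", "stay", "stay", "go", "stay"]
print(info_gain(weekend, visit), info_gain(weather, visit))
```

Ranking attributes by this score and keeping the top ones is the selection step that sped up model building in the experiments; the same quantity drives the split choices inside J48 (C4.5).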
Procedia PDF Downloads 294