Search results for: double robust estimation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4386

1116 Hardy Type Inequalities of Two-Dimensional on Time Scales via Steklov Operator

Authors: Wedad Albalawi

Abstract:

Mathematical inequalities lie at the core of mathematical study and are used in almost all branches of mathematics, as well as in various areas of science and engineering. The inequalities of Hardy, Littlewood and Polya were the first significant compilation of such results; that work presented fundamental ideas, results and techniques, and it has had much influence on research in various branches of analysis. Since 1934, numerous inequalities have been produced and studied in the literature. Furthermore, some inequalities have been formulated through operators: in 1989, weighted Hardy inequalities were obtained for integration operators, and weighted estimates were then derived for Steklov operators, which are used in the solution of the Cauchy problem for the wave equation. These were improved upon in 2011 to include the boundedness of integral operators from a weighted Sobolev space to a weighted Lebesgue space. Several inequalities have since been demonstrated and refined using the Hardy-Steklov operator. More recently, many integral inequalities have been improved by means of differential operators. The Hardy inequality is one of the tools used to study the integrability of solutions of differential equations, and the dynamic inequalities of Hardy and Copson have been extended and improved by various integral operators. These inequalities are interesting to apply in different fields of mathematics (function spaces, partial differential equations, mathematical modeling). Inequalities involving the Copson and Hardy inequalities have appeared on time scales, yielding new special versions of them. A time scale is an arbitrary nonempty closed subset of the real numbers. Time-scale versions of these inequalities have received a great deal of attention and have become a major field in both pure and applied mathematics. There are many applications of dynamic equations on time scales to quantum mechanics, electrical engineering, neural networks, heat transfer, combinatorics, and population dynamics. This study focuses on double integrals to obtain new time-scale inequalities of Copson type driven by the Steklov operator, which can be applied in the solution of the Cauchy problem for the wave equation. The proofs are carried out by introducing restrictions on the operator in several cases, and the inequalities are obtained using concepts from time-scale calculus such as Fubini's theorem and Hölder's inequality.
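For orientation, the classical integral inequality of Hardy, which the time-scale results above generalize, states that for $p > 1$ and nonnegative measurable $f$:

```latex
\int_0^\infty \left( \frac{1}{x} \int_0^x f(t)\, dt \right)^p dx
\;\le\; \left( \frac{p}{p-1} \right)^p \int_0^\infty f^p(x)\, dx ,
```

where the constant $\left(p/(p-1)\right)^p$ is sharp; time-scale versions replace these integrals with delta (or nabla) integrals over an arbitrary time scale.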

Keywords: time scales, inequality of Hardy, inequality of Copson, Steklov operator

Procedia PDF Downloads 71
1115 Development and Validation of Selective Methods for Estimation of Valaciclovir in Pharmaceutical Dosage Form

Authors: Eman M. Morgan, Hayam M. Lotfy, Yasmin M. Fayez, Mohamed Abdelkawy, Engy Shokry

Abstract:

Two simple, selective, economical, safe, accurate, precise and environmentally friendly methods were developed and validated for the quantitative determination of valaciclovir (VAL) in the presence of its related substances R1 (acyclovir) and R2 (guanine), in bulk powder and in the commercial pharmaceutical product containing the drug. Method A is a colorimetric method in which VAL selectively reacts with ferric hydroxamate; the developed color was measured at 490 nm over a concentration range of 0.4-2 mg/mL, with a percentage recovery of 100.05 ± 0.58 and a correlation coefficient of 0.9999. Method B is a reversed-phase ultra-performance liquid chromatographic (UPLC) technique, which is considered superior to high-performance liquid chromatography with respect to speed, resolution, solvent consumption, time, and cost of analysis. Efficient separation was achieved on an Agilent Zorbax CN column using ammonium acetate (0.1%) and acetonitrile as the mobile phase in a linear gradient program. Elution was complete in less than 5 min, and ultraviolet detection was carried out at 256 nm over a concentration range of 2-50 μg/mL, with a mean percentage recovery of 100.11 ± 0.55 and a correlation coefficient of 0.9999. The proposed methods were fully validated as per International Conference on Harmonization specifications and effectively applied to the analysis of valaciclovir in pure form and in tablet dosage form. Statistical comparison of the results obtained by the proposed and official or reported methods revealed no significant difference in the performance of these methods regarding accuracy and precision.
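As an illustration of the validation arithmetic reported above (correlation coefficient and percentage recovery from a linear calibration curve), the following sketch uses synthetic absorbance data; the slope, intercept, and QC concentration are hypothetical, not the study's values.

```python
import numpy as np

# Hypothetical Beer-Lambert response over method A's calibration range
conc = np.array([0.4, 0.8, 1.2, 1.6, 2.0])          # mg/mL
absorbance = 0.35 * conc + 0.02                      # assumed linear response
slope, intercept = np.polyfit(conc, absorbance, 1)   # least-squares calibration
r = np.corrcoef(conc, absorbance)[0, 1]              # correlation coefficient

# Percentage recovery of a quality-control sample of known concentration
qc_true = 1.0                                        # mg/mL
qc_found = (0.35 * qc_true + 0.02 - intercept) / slope
recovery = 100.0 * qc_found / qc_true
print(f"r = {r:.4f}, recovery = {recovery:.2f}%")
```

The same back-calculation (measured signal through the inverted calibration line) underlies the recovery percentages quoted in the abstract.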

Keywords: hydroxamic acid, related substances, UPLC, valaciclovir

Procedia PDF Downloads 240
1114 Alteration of Placental Development and Vascular Dysfunction in Gestational Diabetes Mellitus Has Impact on Maternal and Infant Health

Authors: Sadia Munir

Abstract:

The aim of this study is to investigate changes in placental development and vascular dysfunction that subsequently affect feto-maternal health in pregnancies complicated by gestational diabetes mellitus (GDM). Fetal and postnatal adverse health outcomes of GDM have been shown to be associated with disturbances in placental structure and function. Children of women with GDM are more likely to be obese and diabetic in childhood and adulthood. GDM also increases the risk of adverse pregnancy outcomes, including preeclampsia, birth injuries, macrosomia, neonatal hypoglycemia, respiratory distress syndrome, neonatal cardiac dysfunction and stillbirth. The incidence of type 2 diabetes in the MENA region is growing at an alarming rate and is estimated to more than double by 2030; five of the top 10 countries for diabetes prevalence in 2010 were in the Gulf region. GDM also increases the risk of developing type 2 diabetes: more than half of the women with GDM develop diabetes later in life. The human placenta is a temporary organ located at the interface between the maternal and fetal blood circulations, with a central role as both a producer and a target of several molecules involved in placental development and function. We performed a PubMed search with the key words placenta, GDM, placental villi, vascularization, cytokines, growth factors, inflammation, hypoxia, oxidative stress and pathophysiology, and through this literature review we investigated differences in the development and vascularization of the placenta, their underlying causes, and their impact on feto-maternal health. We also identified gaps in the literature and research questions that need to be answered to fully understand the central role of the placenta in GDM. This study is important in understanding the pathophysiology of the placenta arising from changes in the vascularization of the villi and in the surface area and diameter of the villous capillaries in pregnancies complicated by GDM. Understanding these mechanisms is necessary in order to develop treatments that reverse their effects on placental malfunction, which in turn will improve mother and child health.

Keywords: gestational diabetes mellitus, placenta, vasculature, villi

Procedia PDF Downloads 316
1113 Classical and Bayesian Inference of the Generalized Log-Logistic Distribution with Applications to Survival Data

Authors: Abdisalam Hassan Muse, Samuel Mwalili, Oscar Ngesa

Abstract:

A generalized log-logistic distribution with variable shapes of the hazard rate was introduced and studied, extending the log-logistic distribution by adding an extra parameter to the classical distribution and leading to greater flexibility in analysing and modeling various data types. The proposed distribution has a large number of well-known lifetime special sub-models, such as the Weibull, log-logistic, exponential, and Burr XII distributions. Its basic mathematical and statistical properties were derived. The method of maximum likelihood was adopted for estimating the unknown parameters of the proposed distribution, and a Monte Carlo simulation study was carried out to assess the behavior of the estimators. The importance of this distribution lies in its ability to model both monotone (increasing and decreasing) and non-monotone (unimodal and bathtub-shaped, or reversed "bathtub"-shaped) hazard rate functions, which are quite common in survival and reliability data analysis. Furthermore, the flexibility and usefulness of the proposed distribution are illustrated on a real-life data set, where it is compared to its sub-models (the Weibull, log-logistic, and Burr XII distributions) and to other three-parameter parametric survival distributions, such as the exponentiated Weibull distribution, the three-parameter lognormal distribution, the three-parameter gamma distribution, the three-parameter Weibull distribution, and the three-parameter (also known as shifted) log-logistic distribution. The proposed distribution provided a better fit than all of the competing distributions based on the goodness-of-fit tests, the log-likelihood, and information criterion values. Finally, Bayesian analysis and an assessment of the performance of Gibbs sampling for the data set were also carried out.
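A minimal sketch of the maximum likelihood step for the classical log-logistic sub-model, using SciPy's `fisk` distribution (SciPy's name for the log-logistic) on simulated survival times; the shape and scale values are illustrative assumptions, not the study's data.

```python
from scipy import stats

# Simulate survival times from a log-logistic (Fisk) distribution,
# one of the special sub-models mentioned above.
true_shape, true_scale = 2.0, 3.0
data = stats.fisk.rvs(c=true_shape, scale=true_scale, size=5000, random_state=42)

# Maximum likelihood estimation of the unknown parameters (location fixed at 0)
shape_hat, loc_hat, scale_hat = stats.fisk.fit(data, floc=0)
print(f"shape: {shape_hat:.2f}, scale: {scale_hat:.2f}")
```

With a large sample the MLEs land close to the generating values, which is the behavior the abstract's Monte Carlo study assesses more systematically.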

Keywords: hazard rate function, log-logistic distribution, maximum likelihood estimation, generalized log-logistic distribution, survival data, Monte Carlo simulation

Procedia PDF Downloads 195
1112 Retail Strategy to Reduce Waste Keeping High Profit Utilizing Taylor's Law in Point-of-Sales Data

Authors: Gen Sakoda, Hideki Takayasu, Misako Takayasu

Abstract:

Waste reduction is a fundamental problem for sustainability. Methods for waste reduction using point-of-sales (POS) data are proposed, utilizing knowledge from a recent econophysics study on a statistical property of POS data. Concretely, a non-stationary time series analysis method based on the particle filter is developed, which accounts for the anomalous fluctuation scaling known as Taylor's law. The method is extended to handle sales data left incomplete by stock-outs, by introducing maximum likelihood estimation for censored data. A way of determining the optimal stock that prices the cost of waste reduction is also proposed. This study focuses on examining the methods for large sales numbers, where Taylor's law is evident. Numerical analysis using aggregated POS data shows the effectiveness of the methods in reducing food waste while maintaining a high profit for large sales numbers. Moreover, pricing the cost of waste reduction reveals that a small profit loss realizes substantial waste reduction, especially when the proportionality constant of Taylor's law is small. Specifically, around 1% profit loss realizes a halving of disposal at a constant of 0.12, which is the actual value for the processed food items used in this research. The methods provide practical and effective solutions for waste reduction while keeping a high profit, especially for large sales numbers.
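A toy check of the fluctuation scaling at the heart of the method: in the large-sales regime of Taylor's law, the standard deviation of sales grows in proportion to the mean, so a log-log regression of variance on mean has slope near 2. All numbers below are illustrative assumptions, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(0)
means = np.logspace(1, 4, 30)   # hypothetical mean daily sales per item
c = 0.12                        # hypothetical proportionality constant
# One year of simulated daily sales per item, with sigma = c * mu
samples = np.array([rng.normal(m, c * m, 365) for m in means])
mu = samples.mean(axis=1)
var = samples.var(axis=1)

# Taylor's law exponent from a log-log fit of variance against mean
slope, intercept = np.polyfit(np.log(mu), np.log(var), 1)
print(f"Taylor exponent: {slope:.2f}")
```

A fitted exponent near 2 is the signature of the linear sigma-mu regime that the abstract exploits for large sales numbers.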

Keywords: food waste reduction, particle filter, point-of-sales, sustainable development goals, Taylor's law, time series analysis

Procedia PDF Downloads 128
1111 The Detection of Implanted Radioactive Seeds on Ultrasound Images Using Convolution Neural Networks

Authors: Edward Holupka, John Rossman, Tye Morancy, Joseph Aronovitz, Irving Kaplan

Abstract:

A common modality for the treatment of early-stage prostate cancer is the implantation of radioactive seeds directly into the prostate. The radioactive seeds are positioned inside the prostate, under transrectal ultrasound imaging, to achieve optimal radiation dose coverage. Once all of the planned seeds have been implanted, two-dimensional transaxial transrectal ultrasound images separated by 2 mm are obtained throughout the prostate, beginning at the base and continuing up to and including the apex. A common deep neural network, called DetectNet, was trained to automatically determine the position of the implanted radioactive seeds within the prostate under ultrasound imaging. The network was trained using 950 training ultrasound images and 90 validation ultrasound images. The commonly used metrics for successful training were used to evaluate the efficacy and accuracy of the trained deep neural network, resulting in loss_bbox (train) = 0.00, loss_coverage (train) = 1.89e-8, loss_bbox (validation) = 11.84, loss_coverage (validation) = 9.70, mAP (validation) = 66.87%, precision (validation) = 81.07%, and recall (validation) = 82.29%, where 'train' refers to the training image set and 'validation' to the validation image set. On the hardware platform used, training took 12.8 seconds per epoch, and the network was trained for over 10,000 epochs. In addition, the seed locations determined by the deep neural network were compared to the seed locations determined by commercial software from a CT scan acquired one to three months after the implant. The deep learning approach was within 2.29 mm of the seed locations determined by the commercial software. The deep learning approach to the determination of radioactive seed locations is robust, accurate, and fast, and falls well within spatial agreement with the gold standard of CT-determined seed coordinates.

Keywords: prostate, deep neural network, seed implant, ultrasound

Procedia PDF Downloads 190
1110 Impact of Output Market Participation on Cassava-Based Farming Households' Welfare in Nigeria

Authors: Seyi Olalekan Olawuyi, Abbyssiania Mushunje

Abstract:

The potential of agricultural production to improve the welfare of smallholder farmers in developing countries is widely documented. Yet the majority of these farming households suffer from shortfalls in production output to meet both consumption needs and market demand, which adversely affects output market participation and, by extension, welfare. This study therefore investigated the impact of output market participation on the household welfare of cassava-based farmers in Oyo State, Nigeria. A multistage sampling technique was used to select the sample of 324 respondents used for this study. The findings from the data, analyzed through composite scores and crosstab analysis, revealed a varying degree of output market participation among the farmers, which also translates into the observed welfare profile differentials in the study area. The probit model for the selection equation identified gender of household head, household size, access to remittances, off-farm income and ownership of farmland as significant drivers of output market participation in the study area. Furthermore, the treatment effect model for the welfare equation and the propensity score matching (PSM) technique were used as robustness checks, and the findings attest that, complementarily with the other significant variables highlighted in this study, output market participation indeed has a significant impact on farming households' welfare. By way of policy implications, the study recommends the active inclusion and empowerment of women in farming activities, birth control strategies, secondary income-smoothing activities and the discouragement of land fragmentation, to boost productivity and output market participation, which by extension can significantly improve farming households' welfare.
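A self-contained sketch of the propensity score matching (PSM) robustness check on simulated households; the covariates, effect size, and sample size are hypothetical stand-ins for the study's data, and the propensity model is a small logistic regression fit by gradient ascent.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
# Hypothetical household covariates (e.g. household size, off-farm income)
X = rng.normal(size=(n, 2))
logit = 0.8 * X[:, 0] - 0.5 * X[:, 1]
D = rng.binomial(1, 1 / (1 + np.exp(-logit)))   # market participation (treatment)
Y = 2.0 * D + X[:, 0] + rng.normal(size=n)      # welfare outcome, true effect 2.0

# Step 1: estimate propensity scores with a logistic regression
Xb = np.column_stack([np.ones(n), X])
w = np.zeros(3)
for _ in range(500):
    p = 1 / (1 + np.exp(-Xb @ w))
    w += 0.1 * Xb.T @ (D - p) / n               # gradient ascent on log-likelihood
ps = 1 / (1 + np.exp(-Xb @ w))

# Step 2: match each participant to the nearest non-participant on the score
treated, control = np.where(D == 1)[0], np.where(D == 0)[0]
match = control[np.abs(ps[control][None, :] - ps[treated][:, None]).argmin(axis=1)]
att = (Y[treated] - Y[match]).mean()            # average treatment effect on treated
print(f"ATT estimate: {att:.2f}")
```

Matching on the score removes the confounding through the covariates, so the ATT estimate lands near the true participation effect.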

Keywords: cassava market participation, households' welfare, propensity score matching, treatment effect model

Procedia PDF Downloads 156
1109 Political Deprivations, Political Risk and the Extent of Skilled Labor Migration from Pakistan: Finding of a Time-Series Analysis

Authors: Syed Toqueer Akhter, Hussain Hamid

Abstract:

Over the last few decades, an upward trend has been observed in labor migration from Pakistan. The emigrants are not only economically motivated but also in search of a safe living environment in more developed countries in Europe, North America and the Middle East. The opportunity cost of migration comes in the form of brain drain, that is, the loss of qualified and skilled human capital. Throughout the history of Pakistan, situations of political instability have emerged, ranging from violations of political rights and political disappearances to political assassinations. Providing security to citizens is a major issue in Pakistan due to the increase in crime and terrorist activities. The aim of the study is to test the impact of political instability, appearing in the form of political terror, violation of political rights and restriction of civil liberty, on skilled labor migration. Three proxies are used to measure political instability: the political terror scale (a scale of 1-5 capturing the political terror and violence that a country encounters in a particular year), political rights (a rating of 1-7 describing the ability of people to participate without restraint in the political process) and civil liberty (a rating of 1-7, where civil liberty is defined as freedom of expression and rights without government intervention). Using time series data from 1980-2011, distributed lag models were used for estimation, because migration is not a one-time process: previous events and earlier migration can lead to further migration. Our research clearly shows that political instability, appearing in the form of political terror, restrictions on political rights and restrictions on civil liberty, is significant in explaining the extent of skilled migration from Pakistan.
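The distributed lag setup can be sketched with ordinary least squares on simulated series; the lag length, coefficients, and series below are hypothetical, chosen only to show how lagged political-instability terms enter the regression.

```python
import numpy as np

rng = np.random.default_rng(7)
T, max_lag = 200, 2
# Hypothetical political-terror index driving skilled emigration with
# effects distributed over the current and two previous periods
terror = rng.normal(size=T)
true_betas = [1.0, 0.6, 0.3]
migration = sum(b * np.roll(terror, k) for k, b in enumerate(true_betas))
migration[:max_lag] = 0                         # drop wrapped-around entries
migration += 0.1 * rng.normal(size=T)           # measurement noise

# Build the distributed-lag design matrix (current value plus two lags)
Xlag = np.column_stack([terror[max_lag - k : T - k] for k in range(max_lag + 1)])
betas, *_ = np.linalg.lstsq(Xlag, migration[max_lag:], rcond=None)
print(np.round(betas, 2))                       # recovers roughly the true lags
```

The fitted coefficients describe how much of today's migration responds to instability now, one period ago, and two periods ago, which is exactly the decomposition the study's model provides.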

Keywords: skilled labor migration, political terror, political rights, civil liberty, distributed lag model

Procedia PDF Downloads 1018
1108 Estimation of Source Parameters and Moment Tensor Solution through Waveform Modeling of 2013 Kishtwar Earthquake

Authors: Shveta Puri, Shiv Jyoti Pandey, G. M. Bhat, Neha Raina

Abstract:

The Jammu and Kashmir region of the Northwest Himalaya has witnessed many devastating earthquakes in the recent past, yet it has remained unexplored by seismic investigations apart from scanty records of the earthquakes that occurred in the region. In this study, we used local seismic data from the year 2013 recorded by the network of broadband seismographs in J&K. During this period, our seismic stations recorded about 207 earthquakes, including two moderate events of Mw 5.7 on 1st May 2013 and Mw 5.1 on 2nd August 2013. We analyzed the events of Mw 3-4.6 and the main events only (to minimize error) for source parameters, b-value and sense of movement through waveform modeling, in order to understand the seismotectonics and seismic hazard of the region. Most of the events are bounded between 32.9° N - 33.3° N latitude and 75.4° E - 76.1° E longitude; moment magnitude (Mw) ranges from 3 to 5.7, source radius (r) from 0.21 to 3.5 km, stress drop from 1.90 to 71.1 bars, and corner frequency from 0.39 to 6.06 Hz. The b-value for this region was found to be 0.83±0 from these events, which is lower than the normal value (b=1), indicating that the area is under high stress. The travel time inversion and waveform inversion methods suggest focal depths up to 10 km, probably above the detachment depth of the Himalayan region. The moment tensor solution of the main event of 2nd August (Mw 5.1, 02:32:47 UTC) suggests that the source fault strikes at 295° with a dip of 33° and a rake of 85°. These events form an intense cluster of small to moderate events within a narrow zone between the Panjal Thrust and the Kishtwar Window. The moment tensor solutions of the main events and their aftershocks indicate that thrust-type movement is occurring in this region.
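The standard relations behind the reported source parameters can be sketched as follows: the moment magnitude formula of Hanks and Kanamori (1979), and the source radius and stress drop of Brune's (1970) circular-crack model. The numeric inputs (seismic moment, corner frequency, shear velocity) are hypothetical, not the study's values.

```python
import math

def moment_magnitude(M0_Nm: float) -> float:
    """Moment magnitude from seismic moment in N*m (Hanks & Kanamori, 1979)."""
    return (2.0 / 3.0) * (math.log10(M0_Nm) - 9.1)

def brune_source_radius(fc_hz: float, beta_m_s: float = 3500.0) -> float:
    """Source radius (m) from corner frequency and shear velocity (Brune, 1970)."""
    return 2.34 * beta_m_s / (2.0 * math.pi * fc_hz)

def stress_drop(M0_Nm: float, r_m: float) -> float:
    """Circular-crack stress drop (Pa) from moment and source radius."""
    return 7.0 * M0_Nm / (16.0 * r_m**3)

M0 = 4.5e17                                  # hypothetical moment, N*m
print(f"Mw = {moment_magnitude(M0):.1f}")
r = brune_source_radius(fc_hz=0.5)           # hypothetical corner frequency
print(f"r = {r / 1000:.2f} km, stress drop = {stress_drop(M0, r) / 1e5:.1f} bar")
```

Running the moment, corner frequency, radius, and stress drop through these relations is how the ranges quoted in the abstract are typically derived from the spectra.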

Keywords: b-value, moment tensor, seismotectonics, source parameters

Procedia PDF Downloads 308
1107 Estimation of Hysteretic Damping in Steel Dual Systems with Buckling Restrained Brace and Moment Resisting Frame

Authors: Seyed Saeid Tabaee, Omid Bahar

Abstract:

Energy dissipation devices are now commonly used in structures. A high rate of energy absorption during earthquakes is the benefit of such devices, which reduces damage to structural elements, particularly columns. The hysteretic damping capacity of energy dissipation devices is their key property, though it may complicate the analysis and design of such structures. This effect may generally be represented by an equivalent viscous damping, which may be obtained from the expected hysteretic behavior under the design or maximum considered displacement of a structure. In this paper, the hysteretic damping coefficient of a steel moment resisting frame (MRF) whose performance is enhanced by a buckling restrained brace (BRB) system has been evaluated. Knowing the damping fraction shared between the BRB and the MRF in advance is essential for seismic design procedures such as the Direct Displacement-Based Design (DDBD) method. This paper presents an approach for calculating the damping fraction for such systems by carrying out dynamic nonlinear time history analysis (NTHA) under harmonic loading tuned to the natural frequency of the system. Two steel moment frame structures, one equipped with a BRB and the other without, are studied simultaneously. The extensive analysis shows that the damping fraction of each system may be calculated from its portion of the story shear. In this way, the contribution of each BRB in the floors, and their overall contribution to the structural performance, may be clearly recognized in advance.
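The equivalent viscous damping computation described above can be sketched for a single idealized hysteresis loop; the elastoplastic loop below is a hypothetical BRB-like cycle at ductility 4, for which the closed-form value is 2(mu-1)/(pi*mu).

```python
import numpy as np

def equivalent_viscous_damping(disp, force):
    """Equivalent viscous damping ratio for one closed hysteresis loop:
    zeta_eq = E_D / (4*pi*E_S), where E_D is the dissipated energy (loop
    area, via the shoelace formula) and E_S the strain energy at peak
    response based on the effective secant stiffness."""
    E_D = 0.5 * abs(np.dot(disp, np.roll(force, -1)) - np.dot(force, np.roll(disp, -1)))
    u_max, f_max = np.max(np.abs(disp)), np.max(np.abs(force))
    E_S = 0.5 * f_max * u_max       # = 0.5 * k_eff * u_max**2 with k_eff = f_max/u_max
    return E_D / (4 * np.pi * E_S)

# Idealized elastoplastic loop (BRB-like) at ductility mu = 4:
# the parallelogram vertices traced over one cycle
disp = np.array([4.0, 2.0, -4.0, -2.0])
force = np.array([1.0, -1.0, -1.0, 1.0])
zeta = equivalent_viscous_damping(disp, force)
print(f"zeta_eq = {zeta:.3f}")
```

For this loop the function reproduces the textbook elastoplastic value 2(mu-1)/(pi*mu) with mu = 4, about 0.477; in the paper's setting the loop would instead come from the NTHA response of the BRB or the MRF.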

Keywords: buckling restrained brace, direct displacement based design, dual systems, hysteretic damping, moment resisting frames

Procedia PDF Downloads 427
1106 Predicting Radioactive Waste Glass Viscosity, Density and Dissolution with Machine Learning

Authors: Joseph Lillington, Tom Gout, Mike Harrison, Ian Farnan

Abstract:

The vitrification of high-level nuclear waste within borosilicate glass and its incorporation within a multi-barrier repository deep underground is widely accepted as the preferred disposal method. However, for this to happen, any safety case will require validation that the initially localized radionuclides will not be considerably released into the near/far-field. Therefore, accurate mechanistic models are necessary to predict glass dissolution, and these should be robust to a variety of incorporated waste species and leaching test conditions, particularly given substantial variations across international waste-streams. Here, machine learning is used to predict glass material properties (viscosity, density) and glass leaching model parameters from large-scale industrial data. A variety of machine learning algorithms have been compared to assess performance. Density was predicted solely from composition, whereas viscosity additionally considered temperature. To predict suitable glass leaching model parameters, a large simulated dataset was created by coupling MATLAB and the chemical reactive-transport code HYTEC, considering the state-of-the-art GRAAL model (glass reactivity in allowance of the alteration layer). The trained models were then applied to the large-scale industrial experimental data to identify potentially appropriate model parameters. Results indicate that ensemble methods can accurately predict viscosity as a function of temperature and composition across all three industrial datasets. Glass density prediction shows reliable learning performance, with predictions primarily within the experimental uncertainty of the test data. Furthermore, machine learning can predict the behavior of glass dissolution model parameters, demonstrating potential value in GRAAL model development and in assessing suitable model parameters for large-scale industrial glass dissolution data.
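An ensemble-method sketch of the composition-plus-temperature viscosity prediction, using scikit-learn's random forest on synthetic data; the oxide ranges and the Arrhenius-like toy law standing in for measurements are assumptions, not the industrial datasets.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 2000
# Hypothetical glass compositions (oxide mole fractions) and melt temperature
SiO2 = rng.uniform(0.4, 0.6, n)
B2O3 = rng.uniform(0.1, 0.2, n)
Na2O = rng.uniform(0.05, 0.15, n)
T = rng.uniform(1100, 1400, n)  # K
# Toy Arrhenius-like viscosity law plus noise, standing in for measured data
log_eta = 9000 * (1 + SiO2 - Na2O) / T - 4 + 0.05 * rng.normal(size=n)

X = np.column_stack([SiO2, B2O3, Na2O, T])
X_tr, X_te, y_tr, y_te = train_test_split(X, log_eta, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
r2 = model.score(X_te, y_te)    # held-out coefficient of determination
print(f"held-out R^2: {r2:.2f}")
```

Held-out R^2 is the kind of performance measure used to compare the algorithms, with composition and temperature as the only inputs, mirroring the viscosity task described above.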

Keywords: machine learning, predictive modelling, pattern recognition, radioactive waste glass

Procedia PDF Downloads 112
1105 Permeability Prediction Based on Hydraulic Flow Unit Identification and Artificial Neural Networks

Authors: Emad A. Mohammed

Abstract:

The concept of hydraulic flow units (HFU) has been used for decades in the petroleum industry to improve the prediction of permeability. This concept is strongly related to the flow zone indicator (FZI), which is a function of the reservoir rock quality index (RQI). Both indices are based on the porosity and permeability of core samples, and core samples with similar FZI values are assumed to belong to the same HFU. Thus, after dividing the porosity-permeability data based on the HFU, transformations can be applied to estimate permeability from porosity. The conventional practice is to use the power-law transformation with conventional HFUs, where the percentage error is considerably high. In this paper, a neural network technique is employed as a soft-computing transformation method to predict permeability instead of the power-law method, to avoid this higher percentage error. The technique is based on HFU identification, for which the method of Amaefule et al. (1993) is utilized. In this regard, the Kozeny-Carman (K-C) model and the modified K-C model of Hasan and Hossain (2011) are employed, and a comparison is made between the two transformation techniques for the two porosity-permeability models. Results show that the modified K-C model gives better results, with a lower percentage error in predicting permeability. The results also show that the use of artificial intelligence techniques gives more accurate predictions than the power-law method. This study was conducted on a heterogeneous, complex carbonate reservoir in Oman. Data were collected from seven wells to obtain the permeability correlations for the whole field. The findings of this study will help in obtaining better estimates of the permeability of a complex reservoir.
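The Amaefule et al. (1993) indices referred to above can be computed directly from core porosity and permeability; the core-plug values below are hypothetical.

```python
import numpy as np

def rock_quality_index(k_mD, phi):
    """RQI (micrometers) = 0.0314 * sqrt(k / phi), k in mD, phi as a fraction."""
    return 0.0314 * np.sqrt(k_mD / phi)

def flow_zone_indicator(k_mD, phi):
    """FZI = RQI / phi_z, where phi_z = phi / (1 - phi) is the pore-to-matrix
    (normalized porosity) ratio; similar FZI values indicate the same HFU."""
    return rock_quality_index(k_mD, phi) / (phi / (1 - phi))

# Hypothetical core plugs
k = np.array([120.0, 15.0, 450.0])    # permeability, mD
phi = np.array([0.18, 0.12, 0.22])    # porosity, fraction
fzi = flow_zone_indicator(k, phi)
print(np.round(fzi, 2))
```

Grouping samples by FZI (e.g. by clustering or histogram breaks) yields the HFUs within which a porosity-permeability transform, whether power-law or neural network, is then fitted.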

Keywords: permeability, hydraulic flow units, artificial intelligence, correlation

Procedia PDF Downloads 130
1104 Latent Heat Storage Using Phase Change Materials

Authors: Debashree Ghosh, Preethi Sridhar, Shloka Atul Dhavle

Abstract:

The judicious and economical consumption of energy for sustainable growth and development is now of primary importance; phase change materials (PCM) provide an ingenious option for storing energy in the form of latent heat. An energy storage mechanism incorporating a phase change material increases the efficiency of the process by minimizing the difference between supply and demand; PCM heat exchangers store heat, or non-conventional energy, within the PCM as heat of fusion. The experimental study evaluates the effect of thermo-physical properties, variation in inlet temperature, and flow rate on the charging period of a coiled heat exchanger. Secondly, a numerical study is performed on a PCM double-pipe heat exchanger packed with two different PCMs, namely RT50 and a fatty acid, in the annular region. In this work, the charging of paraffin wax (RT50) using water as the high-temperature fluid (HTF) is simulated. The commercial software Ansys Fluent 15 is used for the simulation, through which the charging of the PCM is studied. In the enthalpy-porosity model, a single momentum equation describes the motion of both the solid and liquid phases. The details of the progress of the phase change with time are presented through contours of melt fraction and temperature, while the velocity contour describes the motion of the liquid phase. The experimental study revealed that paraffin wax melts with almost the same temperature variation at the two intermediate positions. The fatty acid, on the other hand, melts faster owing to its greater thermal conductivity and lower melting temperature. It was also observed that an increase in flow rate leads to a reduction in the charging period. The numerical study supports some of the observations of the experimental study, such as the significant dependence of the melting process on the driving force. The numerical study also clarifies the melting pattern of the PCM, which cannot be observed in the experimental study.
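The latent heat storage principle can be illustrated with a simple charging energy balance: sensible heating of the solid up to the melting point, the latent heat of fusion, then sensible heating of the liquid. The RT50-like property values below are illustrative assumptions, not measured data.

```python
def stored_energy_kj(mass_kg, cp_solid, cp_liquid, latent_kj_kg,
                     T_init, T_melt, T_final):
    """Total heat absorbed (kJ) while charging a PCM from T_init to
    T_final (> T_melt); cp values in kJ/(kg*K), temperatures in deg C."""
    sensible_solid = mass_kg * cp_solid * (T_melt - T_init)
    latent = mass_kg * latent_kj_kg
    sensible_liquid = mass_kg * cp_liquid * (T_final - T_melt)
    return sensible_solid + latent + sensible_liquid

# Hypothetical RT50-like paraffin charge: 5 kg heated from 25 to 60 deg C
E = stored_energy_kj(mass_kg=5.0, cp_solid=2.0, cp_liquid=2.0,
                     latent_kj_kg=160.0, T_init=25.0, T_melt=50.0, T_final=60.0)
print(f"{E:.0f} kJ")
```

The latent term dominates the total, which is why minimizing the supply-demand mismatch with PCM storage is attractive compared to sensible-only storage in the same mass.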

Keywords: latent heat storage, charging period, discharging period, coiled heat exchanger

Procedia PDF Downloads 111
1103 The Beacon of Collective Hope: Mixed Method Study on the Participation of Indian Youth with Regard to Mass Demonstrations Fueled by Social Activism Media

Authors: Akanksha Lohmore, Devanshu Arya, Preeti Kapur

Abstract:

Rarely does the human mind look at the positive fallout of highly negative events. Positive psychology attempts to emphasize strengths and positives for human well-being. The present study examines the underpinning socio-cognitive factors of the protest movements surrounding the gang rape case of December 16th, 2012 through the lens of positive psychology. A gamut of negative emotions came to the fore globally: anger, shame, hatred, violence, and demands for the death penalty for the perpetrators, among others equally strong. In relation to this incident, a number of questions can be raised. Can such a heinous crime have some positive inputs for contemporary society? What has sustained people's protests for so long, even when they see fading prospects of success? This paper explains the constant feeding of protests and the continuation of movements through Snyder's robust model of collective hope, a phenomenon unexplored by social psychologists. A mixed method approach was undertaken. Results confirmed the interaction of various socio-psychological factors that mirrored Snyder's model of collective hope. The major emergent themes were: sense of agency, sense of worthiness, social sharing and common grievances, and hope of collective efficacy. Statistical analysis (correlation and regression) showed a significant relationship between media usage and the occurrence of these themes among participants. These results have implications for media-communication processes and for educational theories on the development of citizenship behavior, and they can further theory development in the social psychology of protest.

Keywords: agency, collective, hope, positive psychology, protest, social media

Procedia PDF Downloads 349
1102 Opportunities for Lesbian/Gay/Bisexual/Transgender/Queer/Questioning Tourism in Vietnam

Authors: Eric D. Olson

Abstract:

The lesbian/gay/bisexual/transgender/queer/questioning (LGBTQ+) tourist travels more frequently, spends more money on travel, and is more likely to travel internationally than straight/heterosexual counterparts. For Vietnam, this represents a major opportunity to increase international tourism, considering that social advancement and recognition of the LGBTQ+ community have greatly increased in Vietnam in the past few years. For example, Vietnam’s Health Ministry confirmed in 2022 that same-sex attraction and being transgender are not mental health conditions. A robust hospitality ecosystem of LGBTQ+ tourism suppliers already exists in Vietnam catering to LGBTQ+ tourists (e.g., Gay Hanoi Tours, VietPride). Vietnam is a safe and welcoming destination with incredible nature, cosmopolitan cities, and friendly people; however, there is a dearth of academic and industry research examining how LGBTQ+ international tourists perceive Vietnam as an LGBTQ+ friendly destination. To rectify this gap, this research examines Vietnam as an LGBTQ+ destination in order to provide government officials, destination marketers, and industry practitioners with insight into this increasingly visible tourist market segment. A self-administered survey was completed by n=375 international LGBTQ+ tourists to examine their perceptions of Vietnam. A factor analysis found three categories of LGBTQ+ factors of visitation to Vietnam: safety and security (Eigenvalue = 4.12, variance = 32.45, α = .82); LGBTQ+ attractions (Eigenvalue = 3.65, variance = 24.23, α = .75); and friendly interactions (Eigenvalue = 3.71, variance = 10.45, α = .96). Multiple regression was used to examine LGBTQ+ visitation factors and intention to visit Vietnam, F = 12.20 (2, 127), p < .001, R² = .56. Safety and security (β = 0.42, p < .001), LGBTQ+ attractions (β = 0.61, p < .001) and friendly interactions (β = 0.42, p < .001) are significant predictors of intention to visit Vietnam.
Results are consistent with previous research highlighting that safety/security is of utmost importance to the community when traveling. Attractions, such as LGBTQ+ tours, suppliers, and festivals, can also serve as pull factors in encouraging tourism. Implications and limitations will be discussed.
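The regression reported above combines the three standardized factor scores into a predicted intention to visit. A minimal sketch of that combination, using the beta weights from the abstract (the respondent's factor scores below are hypothetical standardized values, not study data):

```python
# Standardized beta weights reported in the abstract.
BETAS = {
    "safety_security": 0.42,
    "lgbtq_attractions": 0.61,
    "friendly_interactions": 0.42,
}

def predicted_intention(scores: dict) -> float:
    """Standardized predicted intention: sum of beta * factor z-score."""
    return sum(BETAS[k] * scores[k] for k in BETAS)

# Hypothetical respondent with above-average safety perception.
respondent = {"safety_security": 1.0, "lgbtq_attractions": 0.5,
              "friendly_interactions": -0.2}
print(round(predicted_intention(respondent), 3))  # 0.641
```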

Keywords: tourism, LGBTQ, Vietnam, regression

Procedia PDF Downloads 65
1101 Real-Time Hybrid Simulation for a Tuned Liquid Column Damper Implementation

Authors: Carlos Riascos, Peter Thomson

Abstract:

Real-time hybrid simulation (RTHS) is a modern cyber-physical technique for the experimental evaluation of complex systems that treats the system components with predictable behavior as a numerical substructure and the components that are difficult to model as an experimental substructure. It is therefore an attractive method for evaluating the response of civil structures under earthquake, wind and anthropic loads. Another practical application of RTHS is the evaluation of control systems, as these devices are often nonlinear, and their characterization is an important step in the design of controllers with the desired performance. In this paper, the response of a three-story shear frame controlled by a tuned liquid column damper (TLCD) and subject to base excitation is considered. Both passive and semi-active control strategies were implemented and compared. While the passive TLCD achieved a 50% reduction in the acceleration response of the main structure in comparison with the uncontrolled structure, the semi-active TLCD achieved a 70% reduction and was robust to variations in the dynamic properties of the main structure. In addition, an RTHS was implemented with the main structure modeled as a linear, time-invariant (LTI) system through a state-space representation, and the TLCD, with both control strategies, was evaluated on a shake table that reproduced the displacement of the virtual structure. Current assessment measures for RTHS were used to quantify the performance with parameters such as generalized amplitude, equivalent time delay between the target and measured displacement of the shake table, and energy error using the measured force, and show that the RTHS described in this paper is an accurate method for the experimental evaluation of structural control systems.
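The numerical substructure in an RTHS is typically advanced in time as a state-space model. A minimal sketch of that idea (not the authors' model): a single-storey shear frame treated as an LTI system under harmonic base excitation, integrated with a simple forward-Euler scheme. Mass, stiffness, damping and excitation values are all assumed for illustration:

```python
import math

# Assumed structural properties (not the paper's three-story frame).
m, k, c = 1000.0, 250_000.0, 1_000.0   # kg, N/m, N*s/m
dt, steps = 0.001, 5000                # integration step and count

x, v = 0.0, 0.0    # relative displacement (m) and velocity (m/s)
peak = 0.0
for i in range(steps):
    ag = 0.5 * math.sin(2 * math.pi * 2.0 * i * dt)  # base accel., m/s^2
    a = -(c * v + k * x) / m - ag                    # relative acceleration
    x += dt * v                                      # forward-Euler update
    v += dt * a
    peak = max(peak, abs(x))
print(f"peak relative displacement: {peak:.4e} m")
```

In a real RTHS loop, each computed displacement would be sent to the shake table within the same time step, and the measured restoring force fed back into the state update.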

Keywords: structural control, hybrid simulation, tuned liquid column damper, semi-active control strategy

Procedia PDF Downloads 293
1100 Evaluating Robustness of Conceptual Rainfall-runoff Models under Climate Variability in Northern Tunisia

Authors: H. Dakhlaoui, D. Ruelland, Y. Tramblay, Z. Bargaoui

Abstract:

To evaluate the impact of climate change on water resources at the catchment scale, not only future climate projections are necessary but also robust rainfall-runoff models that remain fairly reliable under changing climate conditions. This study aims at assessing the robustness of three conceptual rainfall-runoff models (GR4J, HBV and IHACRES) on five basins in Northern Tunisia under long-term climate variability. Their robustness was evaluated according to a differential split-sample test based on a climate classification of the observation period regarding both precipitation and temperature conditions. The studied catchments are situated in a region where climate change is likely to have significant impacts on runoff, and they already suffer from scarcity of water resources. They cover the main hydrographical basins of Northern Tunisia (High Medjerda, Zouaraâ, Ichkeul and Cap Bon), which produce the majority of surface water resources in Tunisia. The streamflow regime of the basins can be considered natural since these basins are located upstream from storage dams and in areas where withdrawals are negligible. A 30-year common period (1970‒2000) was considered to capture a large spread of hydro-climatic conditions. Calibration was based on the Kling-Gupta Efficiency (KGE) criterion, while the evaluation of model transferability was performed according to the Nash-Sutcliffe efficiency criterion and volume error. The three hydrological models were shown to behave similarly under climate variability. The models proved better able to simulate the runoff pattern when transferred toward wetter periods than toward drier periods. The limits of transferability are beyond -20% of precipitation and +1.5 °C of temperature in comparison with the calibration period. The deterioration of model robustness could partly be explained by the climate dependency of some parameters.
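The calibration criterion used here, the Kling-Gupta Efficiency, decomposes model error into correlation, variability and bias terms: KGE = 1 - sqrt((r-1)² + (α-1)² + (β-1)²), where r is the linear correlation between simulated and observed flows, α the ratio of their standard deviations, and β the ratio of their means. A minimal sketch with hypothetical flow series:

```python
import math
from statistics import mean, pstdev

def kge(sim, obs):
    """Kling-Gupta Efficiency between simulated and observed series."""
    r = (sum((s - mean(sim)) * (o - mean(obs)) for s, o in zip(sim, obs))
         / (len(sim) * pstdev(sim) * pstdev(obs)))   # linear correlation
    alpha = pstdev(sim) / pstdev(obs)                # variability ratio
    beta = mean(sim) / mean(obs)                     # bias ratio
    return 1 - math.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

obs = [1.0, 2.0, 3.0, 4.0, 5.0]          # hypothetical observed flows
print(round(kge(obs, obs), 3))            # 1.0 for a perfect simulation
print(round(kge([1.1, 2.2, 2.9, 4.3, 4.8], obs), 3))
```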

Keywords: rainfall-runoff modelling, hydro-climate variability, model robustness, uncertainty, Tunisia

Procedia PDF Downloads 289
1099 Toward Indoor and Outdoor Surveillance using an Improved Fast Background Subtraction Algorithm

Authors: El Harraj Abdeslam, Raissouni Naoufal

Abstract:

The detection of moving objects from video image sequences is very important for object tracking, activity recognition, and behavior understanding in video surveillance. The most widely used approach for moving object detection/tracking is background subtraction. Many approaches to background subtraction have been suggested, but these are sensitive to illumination changes, and the solutions proposed to bypass this problem are time-consuming. In this paper, we propose a robust yet computationally efficient background subtraction approach and mainly focus on the ability to detect moving objects in dynamic scenes, for possible applications in monitoring complex and restricted-access areas, where moving and motionless persons must be reliably detected. It consists of three main phases: handling illumination changes, background/foreground modeling, and morphological analysis for noise removal. We handle illumination changes using Contrast Limited Adaptive Histogram Equalization (CLAHE), which limits the intensity of each pixel to a user-determined maximum. Thus, it mitigates the degradation due to scene illumination changes and improves the visibility of the video signal. Initially, the background and foreground images are extracted from the video sequence. Then, the background and foreground images are separately enhanced by applying CLAHE. In order to form multi-modal backgrounds, we model each channel of a pixel as a mixture of K Gaussians (K=5) using a Gaussian Mixture Model (GMM). Finally, we post-process the resulting binary foreground mask using morphological erosion and dilation to remove possible noise. For experimental tests, we used a standard dataset to challenge the efficiency and accuracy of the proposed method on a diverse set of dynamic scenes.
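The per-pixel background model can be illustrated in miniature. The sketch below reduces the paper's K=5 Gaussian mixture to a single running Gaussian per pixel, which already shows the core mechanism: a pixel is declared foreground when it deviates from the background mean by more than a threshold number of standard deviations, and only background pixels update the model. All parameters are assumed values:

```python
import math

class RunningGaussianBG:
    """One running Gaussian per pixel (simplified stand-in for a GMM)."""
    def __init__(self, first_frame, lr=0.05, thresh=2.5):
        self.mean = [float(p) for p in first_frame]
        self.var = [25.0] * len(first_frame)   # assumed initial variance
        self.lr, self.thresh = lr, thresh

    def apply(self, frame):
        mask = []
        for i, p in enumerate(frame):
            d = p - self.mean[i]
            fg = abs(d) > self.thresh * math.sqrt(self.var[i])
            mask.append(255 if fg else 0)
            if not fg:  # update the model only with background pixels
                self.mean[i] += self.lr * d
                self.var[i] += self.lr * (d * d - self.var[i])
        return mask

bg = RunningGaussianBG([100, 100, 100, 100])
print(bg.apply([101, 99, 100, 200]))  # last pixel flagged as foreground
```

A full GMM keeps K such Gaussians per channel, weighted by how often each mode explains the pixel, which is what lets the model absorb dynamic backgrounds such as swaying vegetation.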

Keywords: video surveillance, background subtraction, contrast limited adaptive histogram equalization, illumination invariance, object tracking, object detection, behavior understanding, dynamic scenes

Procedia PDF Downloads 253
1098 Thermal Effects on Wellbore Stability and Fluid Loss in High-Temperature Geothermal Drilling

Authors: Mubarek Alpkiray, Tan Nguyen, Arild Saasen

Abstract:

Geothermal drilling operations involve numerous challenges that increase well cost and nonproductive time. Fluid loss is one of the most undesirable problems and can lead to well abandonment in geothermal drilling. Lost circulation can occur due to natural fractures, high mud weight, and extremely high formation temperatures. This challenge may cause wellbore stability problems and lead to expensive drilling operations. Wellbore stability is the main domain that should be considered to mitigate or prevent fluid loss into the formation. This paper describes the causes of fluid loss in the Pamukoren geothermal field in Turkey. An integrated geomechanics approach and assessment is applied to help understand the fluid loss problems. In geothermal drilling, geomechanics is primarily based on rock properties, in-situ stress characterization, rock temperature, determination of stresses around the wellbore, and rock failure criteria. Since a high temperature difference between the wellbore wall and the drilling fluid is present, the temperature distribution along the wellbore is estimated and incorporated into the wellbore stability analysis. This study reviewed geothermal drilling data to analyze temperature estimation along the wellbore, the causes of fluid loss, and the stored electric capacity of the reservoir. Our observations demonstrate the significant role of the geomechanical approach in understanding safe drilling operations in high-temperature wells. Fluid loss is encountered due to thermal stress effects around the borehole. This paper provides a wellbore stability analysis for a geothermal drilling operation to discuss the causes of lost circulation resulting in nonproductive time and cost.
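The thermal stress effect mentioned above can be sketched with the classic thermo-elastic term for the hoop stress induced at the wall of a cooled borehole, sigma_T = E·alpha_T·dT / (1 - nu). All rock properties and the temperature difference below are assumed illustrative values, not data from the Pamukoren field:

```python
# Assumed thermo-elastic rock properties (illustrative only).
E = 30e9          # Young's modulus, Pa
nu = 0.25         # Poisson's ratio
alpha_T = 8e-6    # linear thermal expansion coefficient, 1/K
dT = -60.0        # drilling fluid 60 K colder than the formation

# Thermally induced hoop stress at the wellbore wall.
sigma_T = E * alpha_T * dT / (1 - nu)
print(f"thermal hoop stress: {sigma_T / 1e6:.1f} MPa")  # negative: tensile reduction
```

A negative (tensile) contribution of this size lowers the effective hoop stress at the wall, which is one mechanism by which cooling can open fractures and trigger lost circulation.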

Keywords: geothermal wells, drilling, wellbore stresses, drilling fluid loss, thermal stress

Procedia PDF Downloads 185
1097 Monitoring Surface Modification of Polylactide Nonwoven Fabric with Weak Polyelectrolytes

Authors: Sima Shakoorjavan, Dawid Stawski, Somaye Akbari

Abstract:

In this study, attempts have been made first to modify the polylactide (PLA) nonwoven surface with poly(amidoamine) (PAMAM) dendritic polymer to create amine active sites on the PLA surface through an aminolysis reaction. Further, layer-by-layer (LBL) deposition of four layers of two weak polyelectrolytes, PAMAM as polycation and polyacrylic acid (PAA) as polyanion, on the activated PLA was monitored by turbidity analysis of the waste polyelectrolyte solutions after each deposition step. FTIR-ATR analysis confirmed the successful introduction of amine groups into the PLA polymeric chains through an emerging peak around 1650 cm⁻¹, corresponding to N-H bending vibration, and a broad double peak at around 3670-3170 cm⁻¹, corresponding to N-H stretching vibration. The adsorption-desorption behavior of PAMAM and PAA deposition was monitored by the turbidity test. Turbidity results showed desorption and removal of the previously deposited layer (second and third layers) upon deposition of the next layers (third and fourth layers). The turbidity test also revealed the importance of proper rinsing after aminolysis of the PLA nonwoven fabric. For the sample with an insufficient rinsing process, greater desorption and removal of ungrafted PAMAM from the aminolyzed PLA surface into the PAA solution was detected upon deposition of the first PAA layer. This phenomenon can be attributed to electrostatic attraction between the polycation (PAMAM) and polyanion (PAA). Moreover, successful layer deposition through LBL was confirmed by a staining test with Acid Red 1 using spectrophotometric analysis. According to the results, layered PLA with four layers and PAMAM as the top layer showed higher dye absorption (46.7%) than neat (1.2%) and aminolyzed PLA (21.7%). In conclusion, a complicated adsorption-desorption behavior of the dendritic polycation and linear polyanion system was observed.
Although desorption and removal of previously adsorbed layers occurred upon the deposition of the next layer, the remaining polyelectrolyte on the substrate is sufficient for the adsorption of the next polyelectrolyte through electrostatic attraction between oppositely charged polyelectrolytes. Also, an increase in dye adsorption confirmed more introduction of PAMAM onto PLA surface through LBL.

Keywords: surface modification, layer-by-layer technique, weak polyelectrolytes, adsorption-desorption behavior

Procedia PDF Downloads 57
1096 Radiation Annealing of Radiation Embrittlement of the Reactor Pressure Vessel

Authors: E. A. Krasikov

Abstract:

The influence of neutron irradiation on RPV steel degradation is examined with reference to the possible reasons for the substantial experimental data scatter and, furthermore, the nonstandard (non-monotonic) and oscillatory embrittlement behavior. In our view, this phenomenon may be explained by the presence of a wavelike component in the embrittlement kinetics. We suppose that the main factor affecting anomalous steel embrittlement is fast neutron intensity (dose rate or flux), so that the manifestation of the flux effect depends on the attained fluence level: at low fluences, radiation degradation exceeds the normative value, then approaches the normative level, and finally becomes subnormative. Data on radiation damage change, including data from ex-service RPVs, were obtained and analyzed taking into account the chemical factor, fast neutron fluence and neutron flux. This also suggests why estimates of the neutron flux impact on radiation degradation remain controversial. Moreover, as a hypothesis, we suppose that at some stages of irradiation the damaged metal is partially restored by the irradiation itself, i.e., by neutron bombardment. The structure that forms during irradiation undergoes one-time or periodic transformations in the direction of both degradation and recovery of the initial properties. According to our hypothesis, at some stage(s) of metal structure degradation, neutron bombardment becomes a recovering factor. As a result, an oscillation arises that in turn leads to enhanced data scatter.

Keywords: annealing, embrittlement, radiation, RPV steel

Procedia PDF Downloads 336
1095 Determination and Distribution of Formation Thickness Using Seismic and Well Data in Baga/Lake Sub-basin, Chad Basin Nigeria

Authors: Gabriel Efomeh Omolaiye, Olatunji Seminu, Jimoh Ajadi, Yusuf Ayoola Jimoh

Abstract:

The Nigerian part of the Chad Basin has to date been one of the least critically studied basins, with few published scholarly works compared to other basins such as the Niger Delta and Dahomey. This work integrated 3D seismic interpretation and well data analysis of eight wells fairly distributed in Block A, Baga/Lake sub-basin in the Borno Basin, with the aim of determining the thicknesses of the Chad, Kerri-Kerri, Fika, and Gongila Formations in the sub-basin. The Da-1 well (type well) used in this study was subdivided into stratigraphic units based on the regional stratigraphic subdivision of the Chad Basin and was later correlated with the other wells using similarity of observed log responses. Combined density and sonic logs were used to generate synthetic seismograms for seismic-to-well ties. Five horizons were mapped, representing the tops of the formations on the 3D seismic data covering the block; an average velocity function with a maximum error/residual of 0.48% was adopted in the time-to-depth conversion of all the generated maps. There is a general thickening of sediments from west to east, and the estimated thicknesses of the formations in the Baga/Lake sub-basin are: Chad Formation (400-750 m), Kerri-Kerri Formation (300-1200 m), Fika Formation (300-2200 m) and Gongila Formation (100-1300 m). The thickness of the Bima Formation could not be established because the deepest well (Da-1) terminates within the formation. This is a modification to previous and widely referenced studies of the past four decades that based the estimation of formation thickness within the study area on outcrops observed at different locations and on few well data.
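The time-to-depth conversion used when mapping the horizons can be sketched simply: with an average-velocity function, depth = v_avg × TWT / 2, where TWT is the two-way travel time to the horizon. The velocities and times below are hypothetical, not the study's average velocity function:

```python
def twt_to_depth(twt_s: float, v_avg_ms: float) -> float:
    """Convert two-way time (s) to depth (m) using an average velocity (m/s)."""
    return v_avg_ms * twt_s / 2.0

# Hypothetical horizon picks: (two-way time in s, average velocity in m/s).
horizons = {"Chad": (0.50, 1800.0), "Kerri-Kerri": (1.10, 2200.0)}
depths = {name: twt_to_depth(t, v) for name, (t, v) in horizons.items()}
print(depths)
```

In practice the average velocity varies laterally, so a velocity function (or map) calibrated at the well ties is applied point by point across each time-structure map, with the residual at the wells quantifying conversion error.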

Keywords: Baga/Lake sub-basin, Chad basin, formation thickness, seismic, velocity

Procedia PDF Downloads 173
1094 Downtime Modelling for the Post-Earthquake Building Assessment Phase

Authors: S. Khakurel, R. P. Dhakal, T. Z. Yeow

Abstract:

Downtime is one of the major sources (alongside damage and injury/death) of financial loss incurred by a structure in an earthquake. The length of downtime associated with a building after an earthquake varies depending on the time taken for the reaction (to the earthquake), decision (on the future course of action) and execution (of the decided course of action) phases. Post-earthquake assessment of buildings is a key step in the decision-making process to decide the appropriate safety placarding as well as to decide whether a damaged building is to be repaired or demolished. The aim of the present study is to develop a model to quantify downtime associated with the post-earthquake building-assessment phase in terms of two parameters: i) the duration of the different assessment phases; and ii) the probability of different colour tagging. Post-earthquake assessment of buildings includes three stages: Level 1 Rapid Assessment, comprising a fast external inspection shortly after the earthquake; Level 2 Rapid Assessment, comprising a visit inside the building; and Detailed Engineering Evaluation (if needed). In this study, the durations of all three assessment phases are first estimated from the total number of damaged buildings, the total number of available engineers and the average time needed to assess each building. Then, the probability of different tag colours is computed from the 2010-11 Canterbury Earthquake Sequence database. Finally, a downtime model for the post-earthquake building inspection phase is proposed based on the estimated phase lengths and the probability of tag colours. This model is expected to be used for rapid estimation of seismic downtime within the Loss Optimisation Seismic Design (LOSD) framework.
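The two ingredients of the model described above can be sketched directly: (i) a phase length estimated from the number of damaged buildings, the engineers available and the mean inspection time, and (ii) an expected downtime consequence weighted by tag-colour probabilities. All numbers below are hypothetical, not the Canterbury statistics:

```python
def phase_length_days(n_buildings, n_engineers, hours_per_building,
                      hours_per_day=8.0):
    """Days to complete one assessment phase with a fixed workforce."""
    return n_buildings * hours_per_building / (n_engineers * hours_per_day)

# Hypothetical tag-colour probabilities and downtime consequences (days).
TAG_PROBS = {"green": 0.70, "yellow": 0.20, "red": 0.10}
EXTRA_DOWNTIME = {"green": 0.0, "yellow": 30.0, "red": 180.0}

expected_extra = sum(TAG_PROBS[t] * EXTRA_DOWNTIME[t] for t in TAG_PROBS)
print(phase_length_days(5000, 100, 0.5))  # 3.125 days for this phase
print(expected_extra)                     # expected tag-related downtime, days
```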

Keywords: assessment, downtime, LOSD, Loss Optimisation Seismic Design, phase length, tag color

Procedia PDF Downloads 177
1093 A Microsurgery-Specific End-Effector Equipped with a Bipolar Surgical Tool and Haptic Feedback

Authors: Hamidreza Hoshyarmanesh, Sanju Lama, Garnette R. Sutherland

Abstract:

In tele-operative robotic surgery, an ideal haptic device should be equipped with an intuitive and smooth end-effector that covers the surgeon’s hand/wrist degrees of freedom (DOF) and translates the hand joint motions to the end-effector of the remote manipulator with low effort and a high level of comfort. This research introduces the design and development of a microsurgery-specific end-effector: a gimbal mechanism possessing 4 passive and 1 active DOFs, equipped with bipolar forceps and haptic feedback. The robust gimbal structure comprises three lightweight links/joints, pitch, yaw, and roll, each consisting of a low-friction support and a 2-channel accurate optical position sensor. The third link, which provides the tool roll, was specifically designed to grip the tool prongs and accommodate a low-mass geared actuator together with a miniaturized capstan-rope mechanism. The actuator is able to generate delicate torques, using a threaded cylindrical capstan, to emulate the sense of pinch/coagulation during conventional microsurgery. While the left prong of the tool is fixed to the rolling link, the right prong bears a miniaturized drum sector with a large diameter to expand the force scale and resolution. The drum transmits the actuator output torque to the right prong and generates haptic force feedback at the tool level. The tool is also equipped with a Hall-effect sensor and a magnet bar installed face-to-face on the inner sides of the two prongs to measure the tooltip distance and provide an analogue signal to the control system. We believe that such a haptic end-effector could significantly increase the accuracy of telerobotic surgery and help avoid the high forces that are known to cause bleeding/injury.

Keywords: end-effector, force generation, haptic interface, robotic surgery, surgical tool, tele-operation

Procedia PDF Downloads 113
1092 Predicting Returns Volatilities and Correlations of Stock Indices Using Multivariate Conditional Autoregressive Range and Return Models

Authors: Shay Kee Tan, Kok Haur Ng, Jennifer So-Kuen Chan

Abstract:

This paper extends the conditional autoregressive range (CARR) model to a multivariate CARR (MCARR) model and further to a two-stage MCARR-return model to model and forecast the volatilities, correlations and returns of multiple financial assets. The stage-one model fits the scaled realised Parkinson volatility measures of the individual series and their pairwise sums of indices to the MCARR model to obtain in-sample estimates and forecasts of volatilities for these individual and pairwise-sum series. Covariances are then calculated to construct the fitted variance-covariance matrix of returns, which is input into the stage-two return model to capture the heteroskedasticity of the assets’ returns. We investigate different choices of mean function to describe the volatility dynamics. Empirical applications are based on the Standard and Poor’s 500, Dow Jones Industrial Average and Dow Jones United States Financial Services Indices. Results show that the stage-one MCARR models using asymmetric mean functions give better in-sample model fits than those based on symmetric mean functions. They also provide better out-of-sample volatility forecasts than those using CARR models based on two robust loss functions, with the scaled realised open-to-close volatility measure as the proxy for the unobserved true volatility. We also find that the stage-two return models with constant means and multivariate Student-t errors give better in-sample fits than the Baba, Engle, Kraft, and Kroner type of generalized autoregressive conditional heteroskedasticity (BEKK-GARCH) models. The estimates and forecasts of value-at-risk (VaR) and conditional VaR based on the best MCARR-return model for each asset are provided and tested using the Kupiec test to confirm the accuracy of the VaR forecasts.
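The Parkinson volatility measure that feeds the stage-one model uses only the daily high and low: for each day, the variance estimate is (ln(High/Low))² / (4 ln 2). A minimal sketch with hypothetical prices (the scaling applied in the paper is omitted here):

```python
import math

def parkinson_variance(high: float, low: float) -> float:
    """Parkinson daily variance estimate from the high-low range."""
    return math.log(high / low) ** 2 / (4.0 * math.log(2.0))

# Hypothetical (high, low) pairs for three trading days.
days = [(102.0, 98.0), (105.0, 101.0), (103.0, 99.5)]
daily_var = [parkinson_variance(h, l) for h, l in days]
ann_vol = math.sqrt(252 * sum(daily_var) / len(daily_var))  # annualised
print(f"annualised Parkinson volatility: {ann_vol:.2%}")
```

Range-based estimators like this are more efficient than squared close-to-close returns because the intraday range carries information about the whole price path, which is what motivates using them as the volatility input to CARR-type models.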

Keywords: range-based volatility, correlation, multivariate CARR-return model, value-at-risk, conditional value-at-risk

Procedia PDF Downloads 95
1091 Effect of Drag Coefficient Models concerning Global Air-Sea Momentum Flux in Broad Wind Range including Extreme Wind Speeds

Authors: Takeshi Takemoto, Naoya Suzuki, Naohisa Takagaki, Satoru Komori, Masako Terui, George Truscott

Abstract:

The drag coefficient is an important parameter for correctly estimating the air-sea momentum flux. However, its parameterization has not been established due to the variation in field data. Instead, a number of drag coefficient model formulae have been proposed, even though almost none of these models address the extreme wind speed range. For such models, it is unclear how the drag coefficient changes in the extreme wind speed range as the wind speed increases. In this study, we investigated the effect of the drag coefficient models on the air-sea momentum flux in the extreme wind range on a global scale by comparing two different drag coefficient models: one model does not consider the extreme wind speed range, while the other does. We found that the difference between the models in the annual global air-sea momentum flux was small because the occurrence frequency of strong wind (wind speeds of 20 m/s or more) was approximately 1%. However, we also discovered that the models differ in the middle latitudes, where the annual mean air-sea momentum flux is large and the occurrence frequency of strong wind is high. In addition, the estimated data showed that the difference between the models in the drag coefficient is large in the extreme wind speed range, with the largest difference reaching 23% for wind speeds of 35 m/s or more. These results clearly show that the difference between the two drag coefficient models has a significant impact on the estimation of a regional air-sea momentum flux in an extreme wind speed range such as that seen in a tropical cyclone environment. Furthermore, we estimated each air-sea momentum flux using several kinds of drag coefficient models. We will also provide data from an observation tower and results from CFD (Computational Fluid Dynamics) concerning the influence of wind flow at and around the site.
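The kind of comparison described above can be sketched with two hypothetical stand-in parameterisations (not the study's actual models): one whose drag coefficient keeps growing linearly with wind speed, and one that saturates at high wind, as has been suggested for extreme winds. The momentum flux is tau = rho_air · Cd · U²:

```python
RHO_AIR = 1.2  # air density, kg/m^3

def cd_linear(u):
    """Illustrative linear Cd growth with wind speed (no extreme-wind cap)."""
    return (0.61 + 0.063 * u) * 1e-3

def cd_capped(u, u_cap=30.0):
    """Same formula, but Cd saturates beyond an assumed cap wind speed."""
    return cd_linear(min(u, u_cap))

def tau(u, cd):
    """Air-sea momentum flux (wind stress), N/m^2."""
    return RHO_AIR * cd(u) * u ** 2

for u in (10.0, 20.0, 35.0):
    diff = 100 * (tau(u, cd_linear) - tau(u, cd_capped)) / tau(u, cd_linear)
    print(f"U = {u:>4} m/s  relative difference between models: {diff:.1f}%")
```

The two models agree below the cap and diverge only at extreme wind speeds, mirroring why the annual global flux is barely affected while tropical-cyclone-scale regional fluxes can differ substantially.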

Keywords: air-sea interaction, drag coefficient, air-sea momentum flux, CFD (Computational Fluid Dynamics)

Procedia PDF Downloads 367
1090 American Sign Language Recognition System

Authors: Rishabh Nagpal, Riya Uchagaonkar, Venkata Naga Narasimha Ashish Mernedi, Ahmed Hambaba

Abstract:

The rapid evolution of technology in the communication sector continually seeks to bridge the gap between different communities, notably between the deaf community and the hearing world. This project develops a comprehensive American Sign Language (ASL) recognition system, leveraging the advanced capabilities of convolutional neural networks (CNNs) and vision transformers (ViTs) to interpret and translate ASL in real time. The primary objective of this system is to provide an effective communication tool that enables seamless interaction through accurate sign language interpretation. The architecture of the proposed system integrates dual networks: VGG16 for precise spatial feature extraction and vision transformers for contextual understanding of the sign language gestures. The system processes live input, extracting critical features through these sophisticated neural network models, and combines them to enhance gesture recognition accuracy. This integration facilitates a robust understanding of ASL by capturing detailed nuances and broader gesture dynamics. The system is evaluated through a series of tests that measure its efficiency and accuracy in real-world scenarios. Results indicate a high level of precision in recognizing diverse ASL signs, substantiating the potential of this technology in practical applications. Challenges such as enhancing the system’s ability to operate in varied environmental conditions and further expanding the dataset for training were identified and discussed. Future work will refine the model’s adaptability and incorporate haptic feedback to enhance the interactivity and richness of the user experience. This project demonstrates the feasibility of an advanced ASL recognition system and lays the groundwork for future innovations in assistive communication technologies.

Keywords: sign language, computer vision, vision transformer, VGG16, CNN

Procedia PDF Downloads 33
1089 A Tool for Facilitating an Institutional Risk Profile Definition

Authors: Roman Graf, Sergiu Gordea, Heather M. Ryan

Abstract:

This paper presents an approach for the easy creation of an institutional risk profile for the endangerment analysis of file formats. The main contribution of this work is the employment of data mining techniques to support the setup of risk factors with only the values most important to a particular organisation. Subsequently, the risk profile employs fuzzy models and associated configurations for the file format metadata aggregator to support digital preservation experts with a semi-automatic estimation of the endangerment level of file formats. Our goal is to make use of a domain expert knowledge base aggregated from a digital preservation survey in order to detect preservation risks for a particular institution. Another contribution is support for the visualisation and analysis of risk factors for a required dimension. The proposed methods improve the visibility of risk factor information and the quality of the digital preservation process. The presented approach is meant to facilitate decision making for the preservation of digital content in libraries and archives using domain expert knowledge and file format metadata automatically aggregated from linked open data sources. To facilitate decision making, the aggregated information about the risk factors is presented as a multidimensional vector. The goal is to visualise particular dimensions of this vector for analysis by an expert. A sample risk profile calculation and the visualisation of some risk factor dimensions are presented in the evaluation section.
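The multidimensional risk vector idea can be sketched as follows: each file format gets a vector of risk-factor scores, an institution-specific weight profile selects the dimensions that matter to that organisation, and a simple piecewise mapping turns the weighted score into an endangerment level. Factor names, weights and thresholds below are all hypothetical, not the paper's survey-derived values:

```python
# Hypothetical risk-factor scores for one file format (0 = safe, 1 = risky).
RISK_FACTORS = {"software_support": 0.9, "open_specification": 0.2,
                "community_adoption": 0.4}

# Hypothetical institutional weight profile (sums to 1).
INSTITUTION_WEIGHTS = {"software_support": 0.6, "open_specification": 0.1,
                       "community_adoption": 0.3}

def endangerment(scores, weights):
    """Weighted aggregation of the risk vector into a level label."""
    total = sum(weights[k] * scores[k] for k in weights)
    if total < 0.3:
        return total, "low"
    if total < 0.6:
        return total, "medium"
    return total, "high"

score, level = endangerment(RISK_FACTORS, INSTITUTION_WEIGHTS)
print(round(score, 2), level)  # 0.68 high
```

The fuzzy models in the paper replace the hard thresholds used here with overlapping membership functions, so a format near a boundary belongs partly to both levels.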

Keywords: digital information management, file format, endangerment analysis, fuzzy models

Procedia PDF Downloads 396
1088 Development of a Computer Aided Diagnosis Tool for Brain Tumor Extraction and Classification

Authors: Fathi Kallel, Abdulelah Alabd Uljabbar, Abdulrahman Aldukhail, Abdulaziz Alomran

Abstract:

The brain is an important organ in the body since it is responsible for the majority of actions such as vision, memory, etc. However, different diseases such as Alzheimer’s disease and tumors can affect the brain and lead to partial or full disorder. Regular diagnosis is necessary as a preventive measure and can help doctors detect a possible problem early and apply the appropriate treatment, especially in the case of brain tumors. Different imaging modalities have been proposed for the diagnosis of brain tumors. The most powerful and most widely used modality is Magnetic Resonance Imaging (MRI). MRI images are analyzed by doctors in order to locate an eventual tumor in the brain and prescribe the appropriate treatment. Diverse image processing methods have also been proposed to help doctors identify and analyze the tumor. In fact, a large number of Computer Aided Diagnosis (CAD) tools incorporating image processing algorithms have been proposed and are exploited by doctors as a second opinion to analyze and identify brain tumors. In this paper, we propose a new advanced CAD for brain tumor identification, classification and feature extraction. Our proposed CAD includes three main parts. First, the brain MRI is loaded. Second, a robust technique for brain tumor extraction is proposed, based on both the Discrete Wavelet Transform (DWT) and Principal Component Analysis (PCA). DWT is characterized by its multiresolution analytic property, which is why it was applied to MRI images at different decomposition levels for feature extraction. Nevertheless, this technique suffers from a main drawback since it necessitates huge storage and is computationally expensive. To reduce the dimensions of the feature vector and the computing time, the PCA technique is used. In the last stage, according to the extracted features, the brain tumor is classified as either benign or malignant using the Support Vector Machine (SVM) algorithm.
A CAD tool for brain tumor detection and classification, including all the above-mentioned stages, was designed and developed using the MATLAB GUIDE user interface.
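The wavelet stage of the pipeline can be illustrated in one dimension. The sketch below applies a one-level 1-D Haar transform (a simple stand-in for the 2-D DWT applied to MRI slices), splitting a signal into approximation and detail coefficients; keeping only the approximation already halves the feature vector, before PCA reduces it further. The input "row" is a hypothetical line of pixel values:

```python
import math

def haar_dwt_1d(signal):
    """One Haar decomposition level: (approximation, detail) coefficients."""
    approx, detail = [], []
    for a, b in zip(signal[0::2], signal[1::2]):
        approx.append((a + b) / math.sqrt(2))  # local average (low-pass)
        detail.append((a - b) / math.sqrt(2))  # local difference (high-pass)
    return approx, detail

row = [10.0, 12.0, 80.0, 82.0, 15.0, 13.0, 90.0, 88.0]  # hypothetical pixels
approx, detail = haar_dwt_1d(row)
print(len(approx), len(detail))  # 4 4  (feature length halved)
```

Repeating the decomposition on the approximation coefficients gives the multiresolution levels mentioned in the abstract; in 2-D, the same filters are applied along rows and columns to produce the familiar LL, LH, HL and HH sub-bands.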

Keywords: MRI, brain tumor, CAD, feature extraction, DWT, PCA, classification, SVM

Procedia PDF Downloads 242
1087 Economic Evaluation of Cataract Eye Surgery by Health Attendants (Doctor and Nurse) through the Social Insurance Board Card at General Hospital Anutapura, Palu, Central Sulawesi, Indonesia

Authors: Sitti Rahmawati

Abstract:

The payment system for cataract surgery performed by professional health attendants (doctor and nurse) has been expanding through the health insurance program, and this has become one of the factors affecting government budget planning. This system has been implemented for quality and expenditure control, i.e., controlling health overpayment intended to obtain benefit (moral hazard) by users of insurance or by health service providers. Rising health costs are the main issue that hampers society from receiving required health services under a cash payment system. One of the steps the government should take in health financing is to secure health coverage through society's health insurance. The objective of the study is to assess the ability of patients to pay for cataract eye operations for the elderly. Method: the study sample population consisted of patients who obtained the social health insurance board card, starting in the first trimester (January-March) of 2015, with claims recorded in the Indonesian Case Based Groups software, as a purposive sample of 40 patients. Results of the study show that the total unit cost of the surgery service unit was $75, excluding AFC and the salaries of the nurse and doctor. The operation tariff currently implemented in the eye department of Anutapura hospital, excluding AFC and employee salaries, is $80. The operation tariff from the unit cost calculation with the double distribution model is $65. In conclusion, the much greater actual unit cost results in an incentive distribution of $37 to the ophthalmologist and $20 to the nurse per operation. The surgery service tariff is still low; consequently, the hospital receives low revenue, and the quality of health insurance in the eye operation department is relatively low.
To increase service quality, an adequately high cost is required to equip medical equipment and increase the number of professional health attendants serving patients in cataract eye operations at the hospital.

Keywords: economic evaluation, cataract operation, health attendant, health insurance system

Procedia PDF Downloads 164