Search results for: predictive accuracy
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4531

631 Evaluation of Golden Beam Data for the Commissioning of 6 and 18 MV Photons Beams in Varian Linear Accelerator

Authors: Shoukat Ali, Abdul Qadir Jandga, Amjad Hussain

Abstract:

Objective: The main purpose of this study is to compare the percent depth dose (PDD) and in-plane and cross-plane profiles of the Varian Golden Beam Data with measured data for 6 and 18 MV photons for the commissioning of the Eclipse treatment planning system. Introduction: Commissioning of a treatment planning system requires extensive acquisition of beam data for the clinical use of linear accelerators. Accurate dose delivery requires entering the PDDs, profiles, and dose-rate tables for open and wedged fields into the treatment planning system, enabling it to calculate MUs and dose distributions. Varian offers a generic set of beam data as reference data; however, it is not recommended for clinical use. In this study, we compared the generic beam data with measured beam data to evaluate whether the generic data are reliable enough for clinical purposes. Methods and Material: PDDs and profiles of open and wedged fields for different field sizes and at different depths were measured as per Varian's algorithm commissioning guideline. The measurements were performed with a PTW 3D scanning water phantom with a Semiflex ion chamber and MEPHYSTO software. The online available Varian Golden Beam Data were compared with the measured data to evaluate their accuracy for commissioning the Eclipse treatment planning system. Results: The deviation between measured and golden beam data was at most 2%. In PDDs, the deviation increases at deeper depths. Similarly, profiles show the same trend of increasing deviation at large field sizes and greater depths. Conclusion: The study shows that the percentage deviation between measured and golden beam data is within the acceptable tolerance, and the golden beam data can therefore be used for the commissioning process; however, verification of a small subset of acquired data against the golden beam data should be mandatory before clinical use.
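As a rough illustration of the comparison the abstract describes, the point-by-point percent deviation between measured and golden-beam PDD curves can be checked against the 2% tolerance. All depth and dose values below are hypothetical, not the study's data:

```python
def percent_deviation(measured, golden):
    """Point-by-point percent deviation of golden-beam data relative to
    the measured data."""
    return [abs(m - g) / m * 100.0 for m, g in zip(measured, golden)]

# Hypothetical PDD values (% of max dose) at depths of 5, 10, 20 cm
# for a 6 MV beam -- illustrative numbers only.
measured_pdd = [86.0, 67.0, 38.5]
golden_pdd = [86.4, 67.9, 39.2]

deviations = percent_deviation(measured_pdd, golden_pdd)
within_tolerance = all(d <= 2.0 for d in deviations)  # 2% max, per abstract
```

The deviation growing with depth in this toy data mirrors the trend reported in the results.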

Keywords: percent depth dose, flatness, symmetry, golden beam data

Procedia PDF Downloads 489
630 UEFA Super Cup: Economic Effects on Georgian Economy

Authors: Giorgi Bregadze

Abstract:

Tourism is the most viable and sustainable economic development option for Georgia and one of its main sources of foreign exchange earnings. Events are considered one of the most effective ways to attract foreign visitors to a country, and, recently, the government of Georgia has begun investing in this sector very actively. This article stresses the necessity of research-based economic policy in the tourism sector. In this regard, it is of paramount importance to measure the economic effects of events that are subsidized with taxpayers' money. The economic effect of events can be analyzed from two perspectives: the financial perspective of the government and the economic-effect perspective of the tourism administration. The article emphasizes the more realistic and all-inclusive focus of the tourism administration's economic-effect analysis, as it concentrates on the income of residents and local businesses, part of which generates tax revenues for the government. The public would like to know what the economic returns on investment are. The methodology used in this article to describe the economic effects of the UEFA Super Cup held in Tbilisi will help to answer this question. The methodology is based on three main principles and covers three stages. Using the suggested methodology, the article estimates the direct economic effect of the UEFA Super Cup on the Georgian economy. Although the attempt to conduct an economic-effect analysis of the event in Georgia was successful, some obstacles and insufficiencies were identified during the survey. The article offers several recommendations that will help refine the methodology and improve the accuracy of the data. Furthermore, it is very important to establish a correct standard for measuring events in Georgia; in that case, unethical measurement practices widely utilized by different research companies will not prompt others to report overestimated effects.
It is worth mentioning that, to the author's best knowledge, this is the first attempt to measure the economic effect of an event held in Georgia.
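A standard expenditure-based calculation of the direct economic effect excludes "time switchers" (visitors who would have come anyway but shifted their dates) and "casuals" (visitors already present), as the keywords suggest this methodology does. A minimal sketch with entirely hypothetical figures:

```python
def direct_economic_effect(visitors, avg_spend, time_switcher_share, casual_share):
    """Direct effect = spending by visitors attributable to the event.
    Time switchers and casuals are excluded because their spending would
    have occurred without the event."""
    attributable = visitors * (1.0 - time_switcher_share - casual_share)
    return attributable * avg_spend

# Hypothetical figures: 15,000 foreign visitors, 500 (currency units)
# average spend, 10% time switchers, 15% casuals.
effect = direct_economic_effect(15_000, 500.0, 0.10, 0.15)
```

Only the 75% of visitors attributable to the event count toward the direct effect in this sketch.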

Keywords: biased economic effect analysis, expenditure of local citizens, time switchers and casuals, UEFA super cup

Procedia PDF Downloads 152
629 Improving Fingerprinting-Based Localization System Using Generative Artificial Intelligence

Authors: Getaneh Berie Tarekegn

Abstract:

A precise localization system is crucial for many artificial intelligence Internet of Things (AI-IoT) applications in the era of smart cities. These applications include traffic monitoring, emergency alarming, environmental monitoring, location-based advertising, intelligent transportation, and smart health care. The most common method for providing continuous positioning services in outdoor environments is a global navigation satellite system (GNSS). Due to non-line-of-sight conditions, multipath, and weather, GNSS does not perform well in dense urban, urban, and suburban areas. This paper proposes a generative AI-based positioning scheme for large-scale wireless settings using fingerprinting techniques. We present a novel semi-supervised deep convolutional generative adversarial network (S-DCGAN)-based radio map construction method for real-time device localization. We also employ a reliable signal fingerprint feature extraction method with t-distributed stochastic neighbor embedding (t-SNE), which extracts dominant features while eliminating noise from hybrid WLAN and long-term evolution (LTE) fingerprints. The proposed scheme reduced the workload of the site surveying required to build the fingerprint database by up to 78.5% and significantly improved positioning accuracy. The results show that the average positioning error of GAILoc is less than 39 cm, and more than 90% of the errors are less than 82 cm. These numerical results demonstrate that, in comparison to traditional methods, the proposed method can significantly improve positioning performance and reduce radio map construction costs.
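The paper builds its radio map with a GAN, but the underlying fingerprinting step is nearest-neighbor matching of a measured signal vector against stored reference fingerprints. A minimal hand-coded sketch (radio map values and positions are invented, not the paper's data):

```python
import math

def locate(fingerprint, radio_map):
    """Nearest-neighbor fingerprint matching: return the reference position
    whose stored RSSI vector is closest (Euclidean distance) to the
    measured fingerprint. The paper generates the radio map with an
    S-DCGAN; here it is a tiny hand-written table."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(radio_map, key=lambda pos: dist(fingerprint, radio_map[pos]))

# Hypothetical radio map: (x, y) position in metres -> RSSI (dBm)
# from three transmitters.
radio_map = {
    (0.0, 0.0): [-40, -70, -80],
    (5.0, 0.0): [-55, -60, -75],
    (0.0, 5.0): [-60, -75, -55],
}
position = locate([-54, -61, -74], radio_map)
```

A denser GAN-generated map reduces the site-survey effort this lookup would otherwise require, which is the point of the abstract's 78.5% workload reduction.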

Keywords: location-aware services, feature extraction technique, generative adversarial network, long short-term memory, support vector machine

Procedia PDF Downloads 71
628 Downscaling Grace Gravity Models Using Spectral Combination Techniques for Terrestrial Water Storage and Groundwater Storage Estimation

Authors: Farzam Fatolazadeh, Kalifa Goita, Mehdi Eshagh, Shusen Wang

Abstract:

The Gravity Recovery and Climate Experiment (GRACE) is a satellite mission with twin satellites for the precise determination of spatial and temporal variations in the Earth's gravity field. The products of this mission are monthly global gravity models containing the spherical harmonic coefficients and their errors. These GRACE models can be used for estimating terrestrial water storage (TWS) variations across the globe at large scales, thereby offering an opportunity for surface and groundwater storage (GWS) assessments. Yet the ability of GRACE to monitor changes at smaller scales is too limited for local water management authorities, largely because of the low spatial and temporal resolutions of its models (~200,000 km² and one month, respectively). High-resolution GRACE data products would substantially enrich the information needed by local-scale decision-makers while providing data for regions that lack adequate in situ monitoring networks, including northern parts of Canada. Such products could eventually be obtained through downscaling. In this study, we extended spectral combination theory to simultaneously downscale GRACE spatially, from its coarse 3° resolution to 0.25°, and temporally, from monthly to daily resolution. This method combines the monthly gravity field solutions of GRACE and daily hydrological model products, in the form of both low- and high-frequency signals, to produce high spatiotemporal resolution TWSA and GWSA products. The main contribution and originality of this study is to comprehensively and simultaneously consider GRACE and hydrological variables and their uncertainties in forming the estimator in the spectral domain. We therefore expect the downscaled products to achieve an acceptable accuracy.
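The core idea behind spectral combination estimators is to weight two data sources per spectral degree by their inverse error variances, so GRACE dominates where its errors are small (low degrees) and the hydrological model dominates at high degrees. A toy sketch with invented coefficients and variances, not real GRACE products:

```python
def spectral_combine(coeffs_grace, coeffs_hydro, var_grace, var_hydro):
    """Per-degree inverse-variance weighting of two sets of spectral
    coefficients. Each combined coefficient is the minimum-variance
    blend of the GRACE and hydrological-model values at that degree."""
    combined = []
    for g, h, vg, vh in zip(coeffs_grace, coeffs_hydro, var_grace, var_hydro):
        w_grace = (1.0 / vg) / (1.0 / vg + 1.0 / vh)
        combined.append(w_grace * g + (1.0 - w_grace) * h)
    return combined

# Degrees 0..3: GRACE error variances grow with degree, the model's stay flat,
# so the blend shifts from GRACE-dominated to model-dominated.
out = spectral_combine([1.0, 0.8, 0.5, 0.2],
                       [1.1, 0.9, 0.6, 0.4],
                       [0.01, 0.04, 0.25, 1.00],
                       [0.25, 0.25, 0.25, 0.25])
```

The actual estimator in the paper operates on spherical harmonic coefficients and carries full uncertainty propagation; this only shows the weighting principle.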

Keywords: GRACE satellite, groundwater storage, spectral combination, terrestrial water storage

Procedia PDF Downloads 83
627 Evaluation of Hepatic Metabolite Changes for Differentiation Between Non-Alcoholic Steatohepatitis and Simple Hepatic Steatosis Using Long Echo-Time Proton Magnetic Resonance Spectroscopy

Authors: Tae-Hoon Kim, Kwon-Ha Yoon, Hong Young Jun, Ki-Jong Kim, Young Hwan Lee, Myeung Su Lee, Keum Ha Choi, Ki Jung Yun, Eun Young Cho, Yong-Yeon Jeong, Chung-Hwan Jun

Abstract:

Purpose: To assess changes in hepatic metabolites for differentiation between non-alcoholic steatohepatitis (NASH) and simple steatosis on proton magnetic resonance spectroscopy (1H-MRS) in both humans and an animal model. Methods: The local institutional review board approved this study, and subjects gave written informed consent. 1H-MRS measurements were performed on a localized voxel of the liver using a point-resolved spectroscopy (PRESS) sequence, and the hepatic metabolites alanine (Ala), lactate/triglyceride (Lac/TG), and TG were analyzed in NASH, simple steatosis, and control groups. Group differences were tested with ANOVA and Tukey's post-hoc tests, and diagnostic accuracy was tested by calculating the area under the receiver operating characteristic (ROC) curve. The associations between metabolite concentrations and pathologic grades or non-alcoholic fatty liver disease (NAFLD) activity scores were assessed by Pearson's correlation. Results: Patients with NASH showed elevated Ala (p < 0.001), Lac/TG (p < 0.001), and TG (p < 0.05) concentrations when compared with patients who had simple steatosis and with healthy controls. The NASH patients had higher levels of Ala (mean ± SEM, 52.5 ± 8.3 vs. 2.0 ± 0.9; p < 0.001) and Lac/TG (824.0 ± 168.2 vs. 394.1 ± 89.8; p < 0.05) than those with simple steatosis. The area under the ROC curve to distinguish NASH from simple steatosis was 1.00 (95% confidence interval: 1.00, 1.00) with Ala and 0.782 (95% confidence interval: 0.61, 0.96) with Lac/TG. The Ala and Lac/TG levels correlated well with steatosis grade, lobular inflammation, and NAFLD activity scores. The metabolic changes observed in humans were reproducible in a mouse model induced by streptozotocin injection and a high-fat diet. Conclusion: 1H-MRS would be useful for differentiating patients with NASH from those with simple hepatic steatosis.
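The ROC AUC used here equals the probability that a randomly chosen NASH case has a higher metabolite concentration than a randomly chosen simple-steatosis case (the Mann-Whitney formulation). A sketch with hypothetical alanine values; perfectly separated groups give AUC 1.00, as the abstract reports for Ala:

```python
def auc(pos_scores, neg_scores):
    """AUC via the Mann-Whitney U statistic: the fraction of
    (positive, negative) pairs in which the positive case scores higher.
    Ties count half."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical Ala concentrations for NASH vs. simple steatosis --
# completely separated, so AUC = 1.0, mirroring the abstract's Ala result.
nash_ala = [40.0, 52.0, 61.0]
simple_ala = [1.2, 2.0, 2.8]
auc_ala = auc(nash_ala, simple_ala)
```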

Keywords: non-alcoholic fatty liver disease, non-alcoholic steatohepatitis, 1H MR spectroscopy, hepatic metabolites

Procedia PDF Downloads 326
626 Numerical Investigation of Dynamic Stall over a Wind Turbine Pitching Airfoil by Using OpenFOAM

Authors: Mahbod Seyednia, Shidvash Vakilipour, Mehran Masdari

Abstract:

Computations of two-dimensional flow past a stationary and a harmonically pitching wind turbine airfoil at a moderate Reynolds number (400,000) are carried out by progressively increasing the angle of attack for the stationary airfoil and at fixed pitching frequencies for the oscillating one. The incompressible Navier-Stokes equations, in conjunction with the Unsteady Reynolds-Averaged Navier-Stokes (URANS) equations for turbulence modeling, are solved with the OpenFOAM package to investigate the aerodynamic phenomena occurring under stationary and pitching conditions on a NACA 6-series wind turbine airfoil. The aim of this study is to enhance the accuracy of numerical simulation in predicting the aerodynamic behavior of an oscillating airfoil in OpenFOAM. Hence, for turbulence modeling, the k-ω SST model with a low-Reynolds correction is employed to capture the unsteady phenomena occurring in the stationary and oscillating motion of the airfoil. Using aerodynamic and pressure coefficients along with flow patterns, the unsteady aerodynamics in the pre-, near-, and post-static-stall regions are analyzed for the harmonically pitching airfoil, and the results are validated against corresponding experimental data possessed by the authors. The results indicate that implementing the mentioned turbulence model leads to accurate prediction of the static-stall angle for the stationary airfoil and of flow separation, the dynamic stall phenomenon, and flow reattachment on the airfoil surface for the pitching one. Due to the geometry of the studied 6-series airfoil, the vortex on the upper surface of the airfoil during upstrokes forms at the trailing edge. Therefore, the flow pattern obtained by our numerical simulations represents the formation and evolution of the trailing-edge vortex in the near- and post-stall regions, where this process determines the dynamic stall phenomenon.
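Harmonic pitching is typically prescribed as a sinusoidal angle of attack, alpha(t) = alpha_mean + alpha_amp * sin(2*pi*f*t). A small sketch of the prescribed motion; the mean angle, amplitude, and frequency below are illustrative values, not the paper's exact cases:

```python
import math

def pitch_angle(t, alpha_mean_deg, alpha_amp_deg, freq_hz):
    """Instantaneous angle of attack (degrees) for a harmonically
    pitching airfoil: alpha(t) = alpha_mean + alpha_amp * sin(2*pi*f*t)."""
    return alpha_mean_deg + alpha_amp_deg * math.sin(2.0 * math.pi * freq_hz * t)

# Example motion: mean AoA 10 deg, amplitude 5 deg, 1 Hz pitching frequency.
a_start = pitch_angle(0.0, 10.0, 5.0, 1.0)     # start of cycle
a_peak = pitch_angle(0.25, 10.0, 5.0, 1.0)     # top of the upstroke
```

In a solver like OpenFOAM, this function would drive the mesh-motion boundary condition for the oscillating case.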

Keywords: CFD, moderate Reynolds number, OpenFOAM, pitching oscillation, unsteady aerodynamics, wind turbine

Procedia PDF Downloads 203
625 Constructing the Joint Mean-Variance Regions for Univariate and Bivariate Normal Distributions: Approach Based on the Measure of Cumulative Distribution Functions

Authors: Valerii Dashuk

Abstract:

The usage of confidence intervals in economics and econometrics is widespread. To investigate a random variable more thoroughly, joint tests are applied; one example is the joint mean-variance test. A new approach for testing such hypotheses and constructing confidence sets is introduced. Exploring both the value of the random variable and its deviation with this technique allows checking simultaneously the shift and the probability of that shift (i.e., portfolio risks). Another application is based on the normal distribution, which is fully defined by its mean and variance and can therefore be tested using the introduced approach. The method is based on the difference of probability density functions. The starting point is two sets of normal distribution parameters that should be compared (whether they may be considered identical at a given significance level). Then the absolute difference in probabilities at each 'point' of the domain of these distributions is calculated. This measure is transformed into a function of cumulative distribution functions and compared to critical values. The table of critical values was designed from simulations. The approach was compared with other techniques for the univariate case. It differs qualitatively and quantitatively in ease of implementation, computation speed, and accuracy of the critical region (theoretical vs. real significance level). Stable results when working with outliers and non-normal distributions, as well as scaling possibilities, are also strong sides of the method. The main advantage of this approach is the possibility of extending it to the infinite-dimensional case, which was not possible in most previous works. At the moment, the expansion to the two-dimensional case is done, which allows testing jointly up to five parameters.
The derived technique is therefore equivalent to classic tests in standard situations but gives more efficient alternatives in nonstandard problems and on large amounts of data.
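One plausible reading of a "measure transformed to a function of cumulative distribution functions" is a KS-type distance: the maximum absolute difference between the two normal CDFs. The actual statistic and its simulated critical values come from the paper; this sketch only shows such a CDF-based distance, computed on a grid:

```python
import math

def norm_cdf(x, mu, sigma):
    """Standard closed form of the normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def max_cdf_distance(mu1, s1, mu2, s2, lo=-10.0, hi=10.0, n=2001):
    """Grid-based maximum absolute difference between two normal CDFs --
    a KS-type distance between the hypothesized and observed parameter
    sets. This is an illustrative stand-in, not the paper's exact measure."""
    step = (hi - lo) / (n - 1)
    return max(abs(norm_cdf(lo + i * step, mu1, s1) -
                   norm_cdf(lo + i * step, mu2, s2)) for i in range(n))

d_same = max_cdf_distance(0.0, 1.0, 0.0, 1.0)  # identical parameter sets
d_diff = max_cdf_distance(0.0, 1.0, 1.0, 1.0)  # mean shifted by one sigma
```

A test would reject equality of the parameter sets when this distance exceeds the simulated critical value at the chosen significance level.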

Keywords: confidence set, cumulative distribution function, hypotheses testing, normal distribution, probability density function

Procedia PDF Downloads 176
624 Frequency Selective Filters for Estimating the Equivalent Circuit Parameters of Li-Ion Battery

Authors: Arpita Mondal, Aurobinda Routray, Sreeraj Puravankara, Rajashree Biswas

Abstract:

The most difficult part of designing a battery management system (BMS) is battery modeling. A good battery model can capture the dynamics that enable energy management through accurate model-based state estimation algorithms. So far, the most suitable and fruitful model is the equivalent circuit model (ECM). However, in real-time applications the model parameters are time-varying, changing with current, temperature, state of charge (SOC), and aging of the battery, and this greatly affects the performance of the model. Therefore, to improve the equivalent circuit model's performance, the parameter estimation has been carried out in the frequency domain. A battery is a very complex system, associated with various chemical reactions and heat generation, so it is very difficult to select the optimal model structure. Increasing the model order improves the model accuracy, but a higher-order model tends toward over-parameterization and unfavorable prediction capability, while the model complexity increases enormously. In the time domain, it becomes difficult to solve higher-order differential equations as the model order increases. This problem can be resolved by frequency-domain analysis, where the computational problems due to ill-conditioning are reduced. In the frequency domain, several dominating frequencies can be found in both the input and the output data. A selective frequency-domain estimation has been carried out: first, the frequencies of the input and output are estimated by subspace decomposition; then specific bands are chosen, from the most dominating to the least, while carrying out least-squares, recursive least-squares, and Kalman filter based parameter estimation. In this paper, a second-order battery model consisting of three resistors, two capacitors, and one SOC-controlled voltage source has been chosen.
For model identification and validation, hybrid pulse power characterization (HPPC) tests have been carried out on a 2.6 Ah LiFePO₄ battery.
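To illustrate one of the estimators mentioned, here is a minimal scalar recursive least squares (RLS) loop fitting a single ohmic resistance from noiseless, invented HPPC-style current/voltage-drop pairs. The paper's actual estimation is multivariate (the full second-order ECM) and band-selective in the frequency domain; this only shows the RLS recursion:

```python
def rls(xs, ys, lam=1.0):
    """Scalar recursive least squares for y = theta * x.
    lam is the forgetting factor (1.0 = no forgetting)."""
    theta, p = 0.0, 1e6                  # initial estimate and covariance
    for x, y in zip(xs, ys):
        k = p * x / (lam + x * p * x)    # gain
        theta += k * (y - theta * x)     # correct with prediction error
        p = (p - k * x * p) / lam        # covariance update
    return theta

# Hypothetical pulse data: instantaneous voltage drop = R0 * current,
# with a true ohmic resistance R0 = 0.05 ohm.
currents = [1.0, 2.0, 2.6, 1.5, 0.5]
drops = [0.05 * i for i in currents]
r0_est = rls(currents, drops)
```

With real data the same recursion runs per frequency band on the subspace-selected components rather than on raw time samples.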

Keywords: equivalent circuit model, frequency estimation, parameter estimation, subspace decomposition

Procedia PDF Downloads 150
623 Financial Performance Model of Local Economic Enterprises in Matalam, Cotabato

Authors: Kristel Faye Tandog

Abstract:

The State-Owned Enterprise (SOE), also called a Public Enterprise (PE), has been playing a vital role in a country's social and economic development. Following this idea, this study focused on the factor structures of financial performance of the Local Economic Enterprises (LEEs), namely the food court, market, slaughterhouse, and terminal in Matalam, Cotabato. It aimed to determine the profile of the LEEs in terms of organizational structure, manner of creation, years in operation, source of initial operating requirements, annual operating budget, geographical location, and size or description of the facility. This study also covered the financial ratios of the LEEs over a five-year period, from calendar year 2009 to 2013. Primary data were gathered using a survey questionnaire administered to 468 respondents, and secondary data were sourced from government archives and the financial documents of the LGU concerned. Twelve dominant factors were identified, namely: "management", "enforcement of laws", "strategic location", "existence of non-formal competitors", "proper maintenance", "pricing", "customer service", "collection process", "rentals and services", "efficient use of resources", "staffing", and "timeliness and accuracy". On the other hand, the financial performance of the LEEs of Matalam, Cotabato, as measured by financial ratios, needs reform: refinement of the cash flow indicator, activity, profitability, and growth ratios is necessary. The cash flow indicator ratio showed difficulty in covering debts in successive years. Likewise, the activity ratios showed that the LEEs had not been effective in putting their investment to work. Moreover, the profitability ratios revealed that they had operated at minimum capacity and incurred net losses, and thus had a weak profit performance. Furthermore, the growth ratios showed that the LEEs had a declining growth trend, particularly in net income.
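Two of the profitability measures behind findings like these are the net profit margin and return on assets. A sketch with entirely hypothetical enterprise figures; a negative net income reproduces the "net losses" finding qualitatively:

```python
def profitability_ratios(net_income, revenue, total_assets):
    """Net profit margin (income per unit of revenue) and return on
    assets (income per unit of assets). Negative values indicate the
    weak profit performance described in the study."""
    return {
        "net_profit_margin": net_income / revenue,
        "return_on_assets": net_income / total_assets,
    }

# Hypothetical LEE year: 1.2M revenue, 0.1M net loss, 5M total assets.
ratios = profitability_ratios(net_income=-100_000, revenue=1_200_000,
                              total_assets=5_000_000)
```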

Keywords: factor structures, financial performance, financial ratios, state owned enterprises

Procedia PDF Downloads 255
622 GIS Data Governance: GIS Data Submission Process for Build-in Project, Replacement Project at Oman Electricity Transmission Company

Authors: Rahma Al Balushi

Abstract:

Oman Electricity Transmission Company's (OETC) vision is to be a renowned world-class transmission grid by 2025, and one of the indications of achieving this vision is obtaining Asset Management ISO 55001 certification, which requires setting out documented Standard Operating Procedures (SOPs). Hence, a documented SOP for the geographic information system (GIS) data process has been established. Also, to effectively manage and improve OETC's power transmission, asset data and information need to be governed as such by the Asset Information & GIS department. This paper describes in detail the GIS data submission process and the journey to develop the current process. The methodology used to develop the process is based on three main pillars: system and end-user requirements, risk evaluation, and data availability and accuracy. The output of this paper shows the dramatic change in the process used, which subsequently results in more efficient, accurate, and up-to-date data. Furthermore, thanks to this process, GIS has been, and remains, ready to be integrated with other systems, as well as to serve as the source of data for all OETC users. Some decisions related to issuing no-objection certificates (NOCs) and scheduling asset maintenance plans in the Computerized Maintenance Management System (CMMS) have consequently been made based on GIS data availability. On the other hand, defining agreed and documented procedures for data collection, data system updates, data release/reporting, and data alterations also helped reduce the missing attributes in GIS transmission data. A considerable difference in geodatabase (GDB) completeness percentage was observed between 2017 and 2021. Overall, we conclude that, through governance, the Asset Information & GIS department can control the GIS data process and collect, properly record, and manage asset data and information within the OETC network. This control extends to other applications and systems integrated with or related to GIS systems.

Keywords: asset management ISO55001, standard procedures process, governance, geodatabase, NOC, CMMS

Procedia PDF Downloads 207
621 Investigation of User Position Accuracy for Stand-Alone and Hybrid Modes of the Indian Navigation with Indian Constellation Satellite System

Authors: Naveen Kumar Perumalla, Devadas Kuna, Mohammed Akhter Ali

Abstract:

Satellite navigation systems such as the United States' Global Positioning System (GPS) play a significant role in determining user position. Similar to GPS, the Indian Regional Navigation Satellite System (IRNSS) is a satellite navigation system indigenously developed by the Indian Space Research Organisation (ISRO) to meet the country's navigation requirements. This system is also known as Navigation with Indian Constellation (NavIC). The NavIC system's main objective is to offer Positioning, Navigation, and Timing (PNT) services to users in its two service areas, covering the Indian landmass and the Indian Ocean. Six NavIC satellites are already deployed in space, and their receivers are in the performance evaluation stage. Four NavIC dual-frequency receivers are installed in the Advanced GNSS Research Laboratory (AGRL) in the Department of Electronics and Communication Engineering, University College of Engineering, Osmania University, India. The NavIC receivers can be operated in two positioning modes: stand-alone IRNSS and hybrid (IRNSS+GPS). In this paper, parameters such as Dilution of Precision (DoP), three-dimensional (3D) root mean square (RMS) position error, and horizontal position error are analyzed with respect to satellite visibility, using real-time IRNSS data obtained by operating the receiver in both positioning modes. Two typical days (6 July 2017 and 7 July 2017) at the Hyderabad station (latitude 17°24'28.07"N, longitude 78°31'4.26"E) are analyzed. It is found that, with respect to the considered parameters, the hybrid mode of the NavIC receiver gives better results than the stand-alone positioning mode. This work finds application in the development of NavIC receivers for civilian navigation.
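DoP quantifies how satellite geometry amplifies range errors into position errors: from the design matrix G (one row [ux, uy, uz, 1] per satellite line-of-sight unit vector), GDOP = sqrt(trace((G^T G)^-1)). A self-contained sketch with a hypothetical geometry, not actual NavIC ephemeris:

```python
import math

def unit_vector(elev_deg, az_deg):
    """East-North-Up unit vector toward a satellite at the given
    elevation and azimuth (degrees)."""
    e, a = math.radians(elev_deg), math.radians(az_deg)
    return (math.cos(e) * math.sin(a), math.cos(e) * math.cos(a), math.sin(e))

def mat_inverse(a):
    """Gauss-Jordan inverse of a small square matrix (lists of lists)."""
    n = len(a)
    aug = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
           for i, row in enumerate(a)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(aug[r][col]))
        aug[col], aug[pivot] = aug[pivot], aug[col]
        pv = aug[col][col]
        aug[col] = [v / pv for v in aug[col]]
        for r in range(n):
            if r != col:
                f = aug[r][col]
                aug[r] = [v - f * w for v, w in zip(aug[r], aug[col])]
    return [row[n:] for row in aug]

def gdop(sat_unit_vectors):
    """Geometric dilution of precision from line-of-sight unit vectors."""
    g = [[ux, uy, uz, 1.0] for ux, uy, uz in sat_unit_vectors]
    gtg = [[sum(row[i] * row[j] for row in g) for j in range(4)]
           for i in range(4)]
    inv = mat_inverse(gtg)
    return math.sqrt(sum(inv[i][i] for i in range(4)))

# Hypothetical geometry: one satellite overhead plus three at 30 deg
# elevation, spread evenly in azimuth.
sats = [unit_vector(90, 0), unit_vector(30, 0),
        unit_vector(30, 120), unit_vector(30, 240)]
gdop_value = gdop(sats)
```

Adding GPS satellites to the IRNSS set enlarges G and generally shrinks this value, which is the geometric reason the hybrid mode outperforms the stand-alone mode.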

Keywords: DoP, GPS, IRNSS, GNSS, position error, satellite visibility

Procedia PDF Downloads 213
620 Bioclimatic Niches of Endangered Garcinia indica Species on the Western Ghats: Predicting Habitat Suitability under Current and Future Climate

Authors: Malay K. Pramanik

Abstract:

In recent years, climate change has become a major threat, and its effects on the geographic distribution of many plant species have been widely documented. However, the impacts of climate change on the distribution of ecologically vulnerable medicinal species remain largely unknown. The identification of suitable habitat for a species under a climate change scenario is a significant step towards mitigating biodiversity decline. This study therefore aims to predict the impact of current and future climatic scenarios on the distribution of the threatened Garcinia indica across the northern Western Ghats using Maximum Entropy (MaxEnt) modelling. Future projections were made for 2050 and 2070 under all Representative Concentration Pathway (RCP) scenarios (2.6, 4.5, 6.0, and 8.5), using 56 species occurrence records and 19 bioclimatic predictors from the BCC-CSM1.1 model of the Intergovernmental Panel on Climate Change (IPCC) Fifth Assessment. The bioclimatic variables were reduced to a smaller set after a multicollinearity test, and their contributions were assessed using the jackknife test. The AUC value of 0.956 ± 0.023 indicates that the model performs with excellent accuracy. The study identified temperature seasonality (39.5 ± 3.1%), isothermality (19.2 ± 1.6%), and annual precipitation (12.7 ± 1.7%) as the major influencing variables in the current and future distributions. The model predicted 10.5% (19,318.7 sq. km) of the study area as moderately to very highly suitable, while 82.6% (151,904 sq. km) was identified as unsuitable or of very low suitability. Our predictions of climate change impact on habitat suitability suggest a drastic reduction in suitability of 5.29% and 5.69% under RCP 8.5 for 2050 and 2070, respectively.
Finally, the results signify that the model might be an effective tool for biodiversity protection, ecosystem management, and species rehabilitation planning under future climate change scenarios.
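The multicollinearity step typically works by greedily dropping any bioclimatic predictor that is highly correlated with one already kept. A sketch with a toy correlation cutoff of |r| > 0.8 and invented mini-series (the paper does not state its exact threshold); "bio4b" is a redundant scaled copy of "bio4" and gets pruned:

```python
import math

def pearson(a, b):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

def prune_correlated(variables, threshold=0.8):
    """Keep a variable only if it is below the correlation threshold
    with every variable already kept (greedy, insertion order)."""
    kept = []
    for name, series in variables.items():
        if all(abs(pearson(series, variables[k])) < threshold for k in kept):
            kept.append(name)
    return kept

# Toy bioclimatic series at five hypothetical occurrence sites.
bioclim = {
    "bio4":  [55, 60, 62, 58, 70],         # temperature seasonality
    "bio4b": [110, 120, 124, 116, 140],    # 2 * bio4: redundant copy
    "bio12": [900, 1500, 700, 2000, 1200], # annual precipitation
}
kept = prune_correlated(bioclim)
```

The surviving variables are then fed to MaxEnt, with the jackknife test ranking their contributions.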

Keywords: Garcinia Indica, maximum entropy modelling, climate change, MaxEnt, Western Ghats, medicinal plants

Procedia PDF Downloads 157
619 The Detection of Implanted Radioactive Seeds on Ultrasound Images Using Convolution Neural Networks

Authors: Edward Holupka, John Rossman, Tye Morancy, Joseph Aronovitz, Irving Kaplan

Abstract:

A common modality for the treatment of early-stage prostate cancer is the implantation of radioactive seeds directly into the prostate. The radioactive seeds are positioned inside the prostate to achieve optimal radiation dose coverage of the prostate, and are placed under transrectal ultrasound imaging. Once all of the planned seeds have been implanted, two-dimensional transaxial transrectal ultrasound images, separated by 2 mm, are obtained throughout the prostate, beginning at the base and continuing up to and including the apex. A common deep neural network, called DetectNet, was trained to automatically determine the position of the implanted radioactive seeds within the prostate under ultrasound imaging. The training used 950 training ultrasound images and 90 validation ultrasound images. The commonly used metrics for successful training were used to evaluate the efficacy and accuracy of the trained deep neural network, and resulted in a loss_bbox (train) = 0.00, loss_coverage (train) = 1.89e-8, loss_bbox (validation) = 11.84, loss_coverage (validation) = 9.70, mAP (validation) = 66.87%, precision (validation) = 81.07%, and recall (validation) = 82.29%, where 'train' refers to the training image set and 'validation' to the validation set. On the hardware platform used, training took 12.8 seconds per epoch, and the network was trained for over 10,000 epochs. In addition, the seed locations determined by the deep neural network were compared to those determined by commercial software based on a CT acquired one to three months after the implant. The deep learning approach was within 2.29 mm of the seed locations determined by the commercial software.
The deep learning approach to the determination of radioactive seed locations is robust, accurate, and fast, and is well within spatial agreement with the gold standard of CT-determined seed coordinates.
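The reported precision and recall derive from true-positive, false-positive, and false-negative detection counts in the usual way. A sketch with hypothetical counts chosen only to land near the abstract's percentages (the abstract reports only the final figures, not the counts):

```python
def detection_metrics(tp, fp, fn):
    """Precision = detections that are real seeds; recall = real seeds
    that were detected."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return precision, recall

# Hypothetical counts: 79 seeds found, 18 false detections, 17 seeds missed.
p, r = detection_metrics(tp=79, fp=18, fn=17)
```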

Keywords: prostate, deep neural network, seed implant, ultrasound

Procedia PDF Downloads 198
618 Autistic Traits and Multisensory Integration–Using a Size-Weight Illusion Paradigm

Authors: Man Wai Lei, Charles Mark Zaroff

Abstract:

Objective: A majority of studies suggest that people with Autism Spectrum Disorder (ASD) have multisensory integration deficits. However, normal and even supranormal multisensory integration abilities have also been reported. Additionally, little of this work has been undertaken utilizing a dimensional conceptualization of ASD, i.e., a broader autism phenotype. Utilizing methodology that controls for common potential confounds, the current study aimed to examine whether deficits in multisensory integration are associated with ASD traits in a non-clinical population. The contribution of affective versus non-affective components of sensory hypersensitivity to multisensory integration was also examined. Methods: Participants were 147 undergraduate university students in Macau, a Special Administrative Region of China, of Chinese ethnicity, aged 16 to 21 (mean age = 19.13; SD = 1.07). Participants completed the Autism-Spectrum Quotient, the Sensory Perception Quotient, and the Adolescent/Adult Sensory Profile, in order to measure ASD traits and the non-affective and affective aspects of sensory/perceptual hypersensitivity, respectively. To explore multisensory integration across the visual and haptic domains, participants were asked to judge which of two equally weighted but differently sized cylinders was heavier, as a means of detecting the presence of the size-weight illusion (SWI). Results: ASD trait level was significantly and negatively correlated with susceptibility to the SWI (p < 0.05); this correlation was not associated with either accuracy in weight discrimination or gender. Examining the top decile of the non-normally distributed SWI scores revealed a significant negative association with sensation avoiding, but not with other aspects of affective or non-affective sensory hypersensitivity.
Conclusion and Implications: Within the normal population, a greater degree of ASD traits is associated with a lower likelihood of multisensory integration, echoing what is often found in individuals with a clinical diagnosis of ASD and providing further evidence for the dimensional nature of this disorder. This tendency appears to be associated with dysphoric emotional reactions to sensory input.

Keywords: Autism Spectrum Disorder, dimensional, multisensory integration, size-weight illusion

Procedia PDF Downloads 482
617 Assessment of Zinc Content in Nuts by Atomic Absorption Spectrometry Method

Authors: Katarzyna Socha, Konrad Mielcarek, Grzegorz Kangowski, Renata Markiewicz-Zukowska, Anna Puscion-Jakubik, Jolanta Soroczynska, Maria H. Borawska

Abstract:

Nuts have high nutritional value. They are a good source of polyunsaturated fatty acids, dietary fiber, vitamins (B₁, B₆, E, K) and minerals: magnesium, selenium, zinc (Zn). Zn is an essential element for the proper functioning and development of the human organism. Due to its antioxidant and anti-inflammatory properties, Zn influences the immune and central nervous systems. It also affects the proper functioning of the reproductive organs and has a beneficial impact on the condition of skin, hair, and nails. The objective of this study was the estimation of Zn content in edible nuts. The research material consisted of 10 types of nuts, with 12 samples of each type: almonds, brazil nuts, cashews, hazelnuts, macadamia nuts, peanuts, pecans, pine nuts, pistachios, and walnuts. The samples of nuts were digested in concentrated nitric acid using a microwave mineralizer (Berghof, Germany). The concentration of Zn was determined by the flame atomic absorption spectrometry method with Zeeman background correction (Hitachi, Japan). The accuracy of the method was verified on certified reference material: Simulated Diet D. The statistical analysis was performed using Statistica v. 13.0 software. For comparison between the groups, Student's t-test was used. The highest content of Zn was shown in pine nuts and cashews: 78.57 ± 21.9 and 70.02 ± 10.2 mg/kg, respectively, significantly higher than in the other types of nuts. The lowest content of Zn was found in macadamia nuts: 16.25 ± 4.1 mg/kg. The consumption of a standard 42-gram portion of almonds, brazil nuts, cashews, peanuts, pecans, or pine nuts covers more than 15% of the recommended daily allowance (RDA) of Zn for women; for men, the same holds for all of the above types of nuts except peanuts. Selected types of nuts can be a good source of Zn in the diet.
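The portion arithmetic behind the RDA comparison can be sketched as follows. The RDA values used here (8 mg/day for women, 11 mg/day for men, the US figures) are assumptions for illustration and are not stated in the abstract; the Zn contents are the study's reported values.

```python
# Hypothetical sketch: percent of the daily zinc RDA covered by a 42 g portion
# of nuts, given the Zn content in mg/kg. The RDA values below are assumed
# (US figures), not taken from the study itself.
PORTION_G = 42
RDA_MG = {"women": 8.0, "men": 11.0}  # assumed daily Zn requirement in mg

def rda_coverage(zn_mg_per_kg: float, group: str) -> float:
    """Return the % of the daily Zn RDA covered by one 42 g portion."""
    zn_in_portion_mg = zn_mg_per_kg * PORTION_G / 1000.0  # mg/kg -> mg/portion
    return 100.0 * zn_in_portion_mg / RDA_MG[group]

# Pine nuts (78.57 mg/kg, highest in the study) vs. macadamia (16.25 mg/kg, lowest):
pine_women = rda_coverage(78.57, "women")
macadamia_women = rda_coverage(16.25, "women")
```

Under these assumed RDAs, a portion of pine nuts covers roughly 40% of the female RDA, while macadamia nuts stay below the 15% threshold the abstract uses.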

Keywords: atomic absorption spectrometry, microelement, nuts, zinc

Procedia PDF Downloads 196
616 Modelling the Impact of Installation of Heat Cost Allocators in District Heating Systems Using Machine Learning

Authors: Danica Maljkovic, Igor Balen, Bojana Dalbelo Basic

Abstract:

Following the regulation of the EU Directive on Energy Efficiency, specifically Article 9, individual metering in district heating systems had to be introduced by the end of 2016. These directions have been implemented in the member states' legal frameworks, Croatia being one of these states. The directive allows installation of both heat metering devices and heat cost allocators. Mainly due to poor communication and PR, a false image was created among the general public that heat cost allocators are devices that save energy. Although this notion is wrong, the aim of this work is to develop a model that would precisely express the influence of the installation of heat cost allocators on potential energy savings in each unit within multifamily buildings. At the same time, in recent years, machine learning has gained wider application in various fields, as it has proven to give good results in cases where large amounts of data are to be processed with the aim of recognizing patterns and correlations among the relevant parameters, as well as in cases where the problem is too complex for human intelligence to solve. One machine learning method in particular, the decision tree method, has demonstrated an accuracy of over 92% in predicting general building consumption. In this paper, machine learning algorithms are used to isolate the sole impact of the installation of heat cost allocators on a single building in multifamily houses connected to district heating systems. Special emphasis is given to regression analysis, logistic regression, support vector machines, decision trees, and the random forest method.
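As a toy illustration of the decision-tree idea the paper builds on, the sketch below fits a single-split regression tree (a stump) that predicts heat consumption from one feature. The feature, the data, and the SSE split criterion are assumptions for illustration, not the authors' model.

```python
# Minimal regression-tree sketch: find the one threshold on a single feature
# that minimises the summed squared error of the two resulting leaves.

def sse(ys):
    """Sum of squared errors of a leaf around its mean prediction."""
    if not ys:
        return 0.0
    m = sum(ys) / len(ys)
    return sum((y - m) ** 2 for y in ys)

def fit_stump(xs, ys):
    """Return (error, threshold, left_mean, right_mean) of the best split."""
    best = (float("inf"), None, None, None)
    for t in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        if not left or not right:
            continue
        err = sse(left) + sse(right)
        if err < best[0]:
            best = (err, t, sum(left) / len(left), sum(right) / len(right))
    return best

# Invented toy data: outdoor temperature (deg C) vs. heat consumption (kWh).
temps = [-10, -5, 0, 5, 10, 15]
consumption = [95, 90, 60, 55, 20, 15]
err, threshold, low_mean, high_mean = fit_stump(temps, consumption)
```

A full decision tree simply applies this split search recursively inside each leaf; the paper's models add more features (per-unit and building-level parameters) on top of the same principle.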

Keywords: district heating, heat cost allocator, energy efficiency, machine learning, decision tree model, regression analysis, logistic regression, support vector machines, random forest

Procedia PDF Downloads 249
615 Sound Analysis of Young Broilers Reared under Different Stocking Densities in Intensive Poultry Farming

Authors: Xiaoyang Zhao, Kaiying Wang

Abstract:

The choice of stocking density in poultry farming is a potential way of determining the welfare level of poultry. However, it is difficult to compare stocking densities across poultry farms because of many variables, such as species, age and weight, feeding regime, house structure, and geographical location of different broiler houses. A method is proposed in this paper to measure the differences between young broilers reared under different stocking densities by sound analysis. Vocalisations of broilers were recorded and analysed under different stocking densities to identify the relationship between sounds and stocking densities. Recordings were made continuously for three-week-old chickens in order to evaluate the variation of sounds emitted by the animals at the beginning. The experimental trial was carried out in an indoor broiler farm; the audio recording procedures lasted for 5 days. Broilers were divided into 5 groups with stocking density treatments of 8/m², 10/m², 12/m² (96 birds/pen), 14/m², and 16/m²; all conditions, including ventilation and feed, were kept the same across groups except stocking density. The recordings and analysis of the sounds of the chickens were made noninvasively. Sound recordings were manually analysed and labelled using sound analysis software: GoldWave Digital Audio Editor. After the sound acquisition process, Mel Frequency Cepstrum Coefficients (MFCC) were extracted from the sound data, and a Support Vector Machine (SVM) was used as an early detector and classifier. This preliminary study, conducted in an indoor broiler farm, shows that this method can be used to classify the sounds of chickens under different densities economically (only a cheap microphone and recorder are needed); the classification accuracy is 85.7%. This method can predict the optimum stocking density of broilers when complemented with animal welfare indicators, animal productivity indicators, and so on.
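The feature-then-classify pipeline can be sketched in miniature. The study uses MFCC features and an SVM; the sketch below substitutes two much simpler features (zero-crossing rate and RMS energy) and a nearest-centroid rule, and the "recordings" are synthetic, so everything here is a simplified stand-in for illustration only.

```python
import math

def features(signal):
    """Zero-crossing rate and RMS energy of one recording."""
    zcr = sum(1 for a, b in zip(signal, signal[1:]) if a * b < 0) / (len(signal) - 1)
    rms = math.sqrt(sum(s * s for s in signal) / len(signal))
    return (zcr, rms)

def nearest_centroid(train, query):
    """train: {label: [feature vectors]}; return the label of the closest centroid."""
    def centroid(vecs):
        return tuple(sum(c) / len(vecs) for c in zip(*vecs))
    cents = {label: centroid(vecs) for label, vecs in train.items()}
    return min(cents, key=lambda label: math.dist(cents[label], query))

# Synthetic "recordings": higher density -> noisier, higher-energy signal.
low = [[0.1 * math.sin(0.3 * t) for t in range(100)] for _ in range(3)]
high = [[0.5 * math.sin(1.5 * t) for t in range(100)] for _ in range(3)]
train = {"low_density": [features(s) for s in low],
         "high_density": [features(s) for s in high]}
label = nearest_centroid(train, features([0.5 * math.sin(1.5 * t) for t in range(100)]))
```

The real pipeline swaps in MFCC vectors per frame and an SVM decision boundary, but the structure, per-recording features fed to a trained classifier, is the same.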

Keywords: broiler, stocking density, poultry farming, sound monitoring, Mel Frequency Cepstrum Coefficients (MFCC), Support Vector Machine (SVM)

Procedia PDF Downloads 162
614 A Study of NT-ProBNP and ETCO2 in Patients Presenting with Acute Dyspnoea

Authors: Dipti Chand, Riya Saboo

Abstract:

Objectives: Early and correct diagnosis of patients presenting to the Emergency Department with acute dyspnoea can present a significant clinical challenge. The common causes of acute dyspnoea and respiratory distress in the Emergency Department are decompensated heart failure (HF), chronic obstructive pulmonary disease (COPD), asthma, pneumonia, acute respiratory distress syndrome (ARDS), pulmonary embolism (PE), and other causes such as anaemia. The aim of the study was to measure NT-pro brain natriuretic peptide (NT-proBNP) and exhaled end-tidal carbon dioxide (ETCO2) in patients presenting with dyspnoea. Material and Methods: This prospective, cross-sectional and observational study was performed at the Government Medical College and Hospital, Nagpur, between October 2019 and October 2021 in patients admitted to the Medicine Intensive Care Unit. Three groups of patients were compared: (1) an HF-related acute dyspnoea group (n = 52), (2) a pulmonary (COPD/PE)-related acute dyspnoea group (n = 31) and (3) a sepsis with ARDS-related dyspnoea group (n = 13). All patients underwent initial clinical examination with a recording of initial vital parameters, along with on-admission ETCO2 measurement, NT-proBNP testing, arterial blood gas analysis, lung ultrasound examination, 2D echocardiography, chest X-rays, and other relevant diagnostic laboratory testing. Results: 96 patients were included in the study. Median NT-proBNP was found to be highest in the heart failure group (11,480 pg/ml), followed by the sepsis group (780 pg/ml), while the pulmonary group had an NT-proBNP of 231 pg/ml. The mean ETCO2 value was highest in the pulmonary group (48.610 mmHg), followed by the heart failure group (31.51 mmHg) and the sepsis group (19.46 mmHg). The results were found to be statistically significant (p < 0.05). Conclusion: NT-proBNP has high diagnostic accuracy in differentiating acute HF-related dyspnoea from pulmonary (COPD and ARDS)-related acute dyspnoea. Higher levels of ETCO2 help in diagnosing patients with COPD.
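To make the direction of the two markers concrete, here is a toy decision rule in the spirit of the study's findings. This is an illustration only, not clinical guidance: the cut-offs below are assumptions chosen between the reported group values, not thresholds validated by the study.

```python
# Illustrative sketch only (NOT clinical guidance). Cut-offs are assumed
# values placed between the reported group medians/means.
NT_PROBNP_CUTOFF = 2000.0  # pg/ml, between HF (11,480) and sepsis (780) medians
ETCO2_CUTOFF = 40.0        # mmHg, between pulmonary (~48.6) and HF (31.51) means

def triage(nt_probnp_pg_ml: float, etco2_mmhg: float) -> str:
    """Toy rule: high NT-proBNP suggests HF; otherwise high ETCO2 suggests COPD/PE."""
    if nt_probnp_pg_ml > NT_PROBNP_CUTOFF:
        return "heart failure"
    if etco2_mmhg > ETCO2_CUTOFF:
        return "pulmonary (COPD/PE)"
    return "other (e.g. sepsis/ARDS)"
```

Applied to the reported group-level values, the rule reproduces the abstract's ordering: the HF group is flagged by NT-proBNP and the pulmonary group by ETCO2.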

Keywords: NT PRO BNP, ETCO2, dyspnoea, lung USG

Procedia PDF Downloads 76
613 Identification of Vehicle Dynamic Parameters by Using Optimized Exciting Trajectory on 3-DOF Parallel Manipulator

Authors: Di Yao, Gunther Prokop, Kay Buttner

Abstract:

Dynamic parameters, including the centre of gravity, mass, and moments of inertia of a vehicle, play an essential role in vehicle simulation, collision testing, and real-time control of vehicle active systems. To identify the important vehicle dynamic parameters, a systematic parameter identification procedure is studied in this work. In the first step of the procedure, a conceptual parallel manipulator (virtual test rig), which possesses three rotational degrees of freedom, is proposed. To characterize the kinematics of the conceptual parallel manipulator, a kinematic analysis consisting of inverse kinematics and singularity analysis is carried out. Based on Euler's rotation equations for rigid body dynamics, the dynamic model of the parallel manipulator and the derivation of the measurement matrix for parameter identification are presented subsequently. In order to reduce the sensitivity of parameter identification to measurement noise and other unexpected disturbances, a parameter optimization process that searches for the optimal exciting trajectory of the parallel manipulator is conducted in the following section. For this purpose, 321-Euler-angles defined by a parameterized finite Fourier series are used to describe the general exciting trajectory of the parallel manipulator. To minimize the condition number of the measurement matrix and thereby achieve better parameter identification accuracy, the unknown coefficients of the parameterized finite Fourier series are estimated by employing an iterative algorithm based on MATLAB®. Meanwhile, the iterative algorithm ensures that the parallel manipulator remains in an achievable working status during the execution of the optimal exciting trajectory. It is shown that the proposed procedure and methods can effectively identify the vehicle dynamic parameters and could be an important application of parallel manipulators in the fields of parameter identification and test rig development.
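The finite-Fourier-series parameterisation of an exciting trajectory can be sketched as follows: each Euler angle is a truncated Fourier series, which is smooth and periodic, so the manipulator returns to its start pose after one fundamental period. The coefficient values below are placeholders for illustration, not optimized values from the paper.

```python
import math

def euler_angle(t, a, b, omega0=2 * math.pi * 0.1):
    """angle(t) = sum_k [ a_k sin(k*w0*t) + b_k cos(k*w0*t) ], k = 1..K (rad)."""
    return sum(ak * math.sin((k + 1) * omega0 * t) + bk * math.cos((k + 1) * omega0 * t)
               for k, (ak, bk) in enumerate(zip(a, b)))

# Three harmonics; placeholder coefficients for one Euler angle (e.g. pitch).
a = [0.10, 0.05, 0.02]
b = [0.00, 0.03, 0.01]
period = 1 / 0.1  # fundamental period of 10 s for f0 = 0.1 Hz

# Periodicity: the trajectory repeats after one fundamental period, which is
# what makes this parameterisation convenient for repeated excitation runs.
angle_start = euler_angle(0.0, a, b)
angle_period = euler_angle(period, a, b)
```

In the paper's procedure, the coefficients a_k, b_k are the decision variables of the optimization that minimizes the condition number of the measurement matrix, subject to the workspace constraints of the manipulator.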

Keywords: parameter identification, parallel manipulator, singularity architecture, dynamic modelling, exciting trajectory

Procedia PDF Downloads 266
612 Aerial Survey and 3D Scanning Technology Applied to the Survey of Cultural Heritage of Su-Paiwan, an Aboriginal Settlement, Taiwan

Authors: April Hueimin Lu, Liangj-Ju Yao, Jun-Tin Lin, Susan Siru Liu

Abstract:

This paper discusses the application of aerial survey technology and 3D laser scanning technology in the surveying and mapping of the settlements and slate houses of the old Taiwanese aborigines. The relics of the old Taiwanese aborigines, with thousands of years of history, are widely distributed in the deep mountains of Taiwan, over a vast area with inconvenient transportation. When constructing basic data on cultural assets, it is necessary to apply new technology to carry out efficient and accurate settlement mapping. In this paper, taking old Paiwan as an example, an aerial survey of a settlement of about 5 hectares and 3D laser scanning of a slate house were carried out. The obtained orthophoto image was used as an important basis for drawing the settlement map. The 3D landscape data of topography and buildings derived from the aerial survey are important for subsequent preservation planning, while the 3D scan of the building provides a more detailed record of architectural forms and materials. The 3D settlement data from the aerial survey can be further applied to a 3D virtual model and animation of the settlement for virtual presentation. The information from the 3D scanning of the slate house can also be used for digital archives and data queries through network resources. The results of this study show that, in large-scale settlement surveys, aerial survey technology can be used to construct the topography of settlements, together with building and landscape spatial information, while 3D scanning serves for small-scale records of individual buildings. This application of 3D technology greatly increases the efficiency and accuracy of the survey and mapping of aboriginal settlements, and is of great help for further preservation planning and the rejuvenation of aboriginal cultural heritage.

Keywords: aerial survey, 3D scanning, aboriginal settlement, settlement architecture cluster, ecological landscape area, old Paiwan settlements, slate house, photogrammetry (SfM, MVS), point cloud, SIFT, DSM, 3D model

Procedia PDF Downloads 170
611 GAILoc: Improving Fingerprinting-Based Localization System Using Generative Artificial Intelligence

Authors: Getaneh Berie Tarekegn

Abstract:

A precise localization system is crucial for many artificial intelligence Internet of Things (AI-IoT) applications in the era of smart cities. These applications include traffic monitoring, emergency alarming, environmental monitoring, location-based advertising, intelligent transportation, and smart health care. The most common method for providing continuous positioning services in outdoor environments is a global navigation satellite system (GNSS). Due to non-line-of-sight conditions, multipath, and weather conditions, GNSS systems do not perform well in dense urban, urban, and suburban areas. This paper proposes a generative AI-based positioning scheme for large-scale wireless settings using fingerprinting techniques. In this article, we present a novel semi-supervised deep convolutional generative adversarial network (S-DCGAN)-based radio map construction method for real-time device localization. We also employ a reliable signal fingerprint feature extraction method with t-distributed stochastic neighbor embedding (t-SNE), which extracts dominant features while eliminating noise from hybrid WLAN and long-term evolution (LTE) fingerprints. The proposed scheme reduced the workload of site surveying required to build the fingerprint database by up to 78.5% and significantly improved positioning accuracy. The results show that the average positioning error of GAILoc is less than 39 cm, and more than 90% of the errors are less than 82 cm. That is, the numerical results proved that, in comparison to traditional methods, the proposed GAILoc method can significantly improve positioning performance and reduce radio map construction costs.
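To show what the radio map is used for once it is built, here is a minimal fingerprinting baseline: weighted k-nearest-neighbour matching of a measured RSSI vector against the map. The paper's actual method constructs the map with a GAN and is far more elaborate; the fingerprint database below is synthetic.

```python
import math

def knn_locate(radio_map, rssi, k=2):
    """radio_map: list of ((x, y), [rssi...]); return an estimated (x, y)."""
    nearest = sorted(radio_map, key=lambda entry: math.dist(entry[1], rssi))[:k]
    # Inverse-distance weighting of the k closest reference fingerprints.
    weights = [1.0 / (math.dist(fp, rssi) + 1e-9) for _, fp in nearest]
    wsum = sum(weights)
    x = sum(w * pos[0] for w, (pos, _) in zip(weights, nearest)) / wsum
    y = sum(w * pos[1] for w, (pos, _) in zip(weights, nearest)) / wsum
    return (x, y)

# Synthetic radio map: reference points on a 5 m grid with 3-AP RSSI vectors.
radio_map = [((0, 0), [-40, -70, -80]),
             ((0, 5), [-50, -60, -80]),
             ((5, 0), [-70, -70, -50]),
             ((5, 5), [-75, -55, -55])]
est = knn_locate(radio_map, [-42, -68, -79], k=2)
```

A generative radio map construction method like the one proposed effectively densifies `radio_map` without extra site surveying, which is where the reported 78.5% workload reduction comes from.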

Keywords: location-aware services, feature extraction technique, generative adversarial network, long short-term memory, support vector machine

Procedia PDF Downloads 75
610 Comparative Study of Skeletonization and Radial Distance Methods for Automated Finger Enumeration

Authors: Mohammad Hossain Mohammadi, Saif Al Ameri, Sana Ziaei, Jinane Mounsef

Abstract:

Automated enumeration of the number of hand fingers is widely used in several motion gaming and distance control applications, and is discussed in several published papers as a starting block for hand recognition systems. An automated finger enumeration technique should not only be accurate, but must also respond quickly to a moving-picture input. The high data rate of video in motion games or distance control can inhibit the program's overall speed, as image processing software such as MATLAB needs to produce results at high computation speeds. Since automated finger enumeration with minimum error and processing time is desired, a comparative study between two finger enumeration techniques is presented and analyzed in this paper. In the pre-processing stage, various image processing functions were applied to a real-time video input to obtain the final cleaned auto-cropped image of the hand to be used by the two techniques. The first technique uses the known morphological tool of skeletonization, counting the number of the skeleton's endpoints as fingers. The second technique uses a radial distance method, which obtains a one-dimensional representation of the hand, to enumerate the number of fingers. For both methods, the different steps of the algorithms are explained. Then, a comparative study analyzes the accuracy and speed of both techniques. Through experimental testing in different background conditions, it was observed that the radial distance method was more accurate and responsive to a real-time video input than the skeletonization method. All test results were generated in MATLAB and were based on displaying a human hand in three different orientations on top of a plain colored background. Finally, the limitations surrounding the enumeration techniques are presented.
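The core of the radial distance method can be sketched in a few lines: measure the distance from the palm centre to each contour point, then count the prominent peaks in that 1-D signal, since each extended finger produces one peak. The contour below is synthetic (a unit "palm" with three Gaussian "finger" bumps), and the peak threshold is an assumption for illustration.

```python
import math

def count_fingers(radial, threshold):
    """Count local maxima above `threshold` in a circular 1-D radial signal."""
    n = len(radial)
    peaks = 0
    for i in range(n):
        prev, nxt = radial[i - 1], radial[(i + 1) % n]
        if radial[i] > threshold and radial[i] > prev and radial[i] >= nxt:
            peaks += 1
    return peaks

# Synthetic radial-distance signal: palm radius 1.0 plus three finger bumps
# centred at 60, 120 and 180 degrees of the contour.
radial = [1.0 + sum(math.exp(-((i - c) ** 2) / 50.0) for c in (60, 120, 180))
          for i in range(360)]
fingers = count_fingers(radial, threshold=1.5)
```

The skeletonization alternative instead thins the binary hand mask to a one-pixel-wide skeleton and counts its endpoints, which is more sensitive to contour noise, consistent with the accuracy difference the paper reports.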

Keywords: comparative study, hand recognition, fingertip detection, skeletonization, radial distance, Matlab

Procedia PDF Downloads 382
609 Arabic Lexicon Learning to Analyze Sentiment in Microblogs

Authors: Mahmoud B. Rokaya

Abstract:

The study of opinion mining and sentiment analysis includes the analysis of opinions, sentiments, evaluations, attitudes, and emotions. The rapid growth of social media, social networks, reviews, forum discussions, microblogs, and Twitter leads to a parallel growth in the field of sentiment analysis. The field of sentiment analysis tries to develop effective tools that make it possible to capture the trends of people. There are two approaches in the field: lexicon-based and corpus-based methods. A lexicon-based method uses a sentiment lexicon that includes sentiment words and phrases with assigned numeric scores. These scores reveal whether sentiment phrases are positive or negative, their intensity, and/or their emotional orientations. Manual creation of lexicons is hard, which brings the need for adaptive automated methods for generating a lexicon. The proposed method generates dynamic lexicons based on the corpus and then classifies text using these lexicons. In the proposed method, different approaches are combined to generate lexicons from text. The proposed method classifies the tweets into 5 classes instead of binary positive/negative classes. The sentiment classification problem is written as an optimization problem: finding optimal sentiment lexicons is the goal of the optimization process. The solution was produced based on mathematical programming approaches to find the best lexicon to classify texts. A genetic algorithm was written to find the optimal lexicon. Then, extraction of a meta-level feature was done based on the optimal lexicon. The experiments were conducted on several datasets. Results, in terms of accuracy, recall, and F-measure, outperformed the state-of-the-art methods proposed in the literature on some of the datasets. A better understanding of the Arabic language and the culture of Arab Twitter users, as well as the sentiment orientation of words in different contexts, can be achieved based on the sentiment lexicons proposed by the algorithm.
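The "evolve a lexicon with a genetic algorithm" idea can be sketched in miniature: individuals are word-score vectors, and fitness is how many labelled documents the lexicon classifies correctly by the sign of its summed scores. The corpus, the binary (rather than 5-class) labels, and the operators below are drastically simplified stand-ins for illustration.

```python
import random

# Toy GA sketch: evolve word sentiment scores so that sign(sum of scores)
# matches the document labels. Vocabulary and corpus are invented.
random.seed(0)
WORDS = ["good", "bad", "great", "awful", "ok"]
DOCS = [(["good", "great"], +1), (["bad", "awful"], -1),
        (["good", "bad", "great"], +1), (["awful", "ok"], -1)]

def fitness(lex):
    """Number of documents whose summed lexicon score has the right sign."""
    return sum(1 for doc, label in DOCS
               if sum(lex[w] for w in doc) * label > 0)

def evolve(pop_size=30, generations=40):
    pop = [[random.uniform(-1, 1) for _ in WORDS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: -fitness(dict(zip(WORDS, ind))))
        parents = pop[: pop_size // 2]          # elitist selection
        children = []
        for _ in range(pop_size - len(parents)):
            p, q = random.sample(parents, 2)
            cut = random.randrange(1, len(WORDS))
            child = p[:cut] + q[cut:]           # one-point crossover
            child[random.randrange(len(WORDS))] += random.gauss(0, 0.2)  # mutation
            children.append(child)
        pop = parents + children
    best = max(pop, key=lambda ind: fitness(dict(zip(WORDS, ind))))
    return dict(zip(WORDS, best))

lexicon = evolve()
```

The paper's formulation replaces this toy fitness with a 5-class objective over real tweet datasets, and then derives meta-level features from the evolved lexicon.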

Keywords: social media, Twitter sentiment, sentiment analysis, lexicon, genetic algorithm, evolutionary computation

Procedia PDF Downloads 189
608 Toward Indoor and Outdoor Surveillance Using an Improved Fast Background Subtraction Algorithm

Authors: El Harraj Abdeslam, Raissouni Naoufal

Abstract:

The detection of moving objects from video image sequences is very important for object tracking, activity recognition, and behavior understanding in video surveillance. The most widely used approach for moving object detection/tracking is background subtraction. Many approaches have been suggested for background subtraction, but these are sensitive to illumination changes, and the solutions proposed to bypass this problem are time-consuming. In this paper, we propose a robust yet computationally efficient background subtraction approach and, mainly, focus on the ability to detect moving objects in dynamic scenes, for possible applications in monitoring complex and restricted-access areas, where moving and motionless persons must be reliably detected. It consists of three main phases: handling illumination changes, background/foreground modeling, and morphological analysis for noise removal. We handle illumination changes using Contrast Limited Adaptive Histogram Equalization (CLAHE), which limits the intensity of each pixel to a user-determined maximum. Thus, it mitigates the degradation due to scene illumination changes and improves the visibility of the video signal. Initially, the background and foreground images are extracted from the video sequence. Then, the background and foreground images are separately enhanced by applying CLAHE. In order to form multi-modal backgrounds, we model each channel of a pixel as a mixture of K Gaussians (K = 5) using a Gaussian Mixture Model (GMM). Finally, we post-process the resulting binary foreground mask using morphological erosion and dilation transformations to remove possible noise. For the experimental test, we used a standard dataset to challenge the efficiency and accuracy of the proposed method on a diverse set of dynamic scenes.
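The per-pixel modelling at the heart of this approach can be sketched with a single running Gaussian per pixel: a sample is foreground if it deviates from the background mean by more than a few standard deviations, and only background-like samples update the model. The paper uses a K=5 Gaussian mixture plus CLAHE pre-processing and morphological post-processing on top of this basic idea; the parameters and pixel stream below are invented for illustration.

```python
class PixelModel:
    """One running Gaussian background model for a single pixel channel."""

    def __init__(self, alpha=0.05, init=0.0):
        self.mean, self.var, self.alpha = init, 25.0, alpha

    def update(self, x, k=2.5):
        """Return True if x is foreground, then adapt the background model."""
        fg = abs(x - self.mean) > k * self.var ** 0.5
        if not fg:  # only absorb background-like samples into the model
            d = x - self.mean
            self.mean += self.alpha * d
            self.var = (1 - self.alpha) * self.var + self.alpha * d * d
        return fg

# A pixel that sees background around 100 for a while, then a bright object
# passes through, then the background returns.
model = PixelModel(init=100.0)
stream = [100, 101, 99, 100, 102, 100, 200, 200, 100]
flags = [model.update(x) for x in stream]
```

A GMM generalizes this by keeping K such Gaussians per pixel and matching each new sample against all of them, which is what makes multi-modal backgrounds (e.g. swaying foliage) representable.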

Keywords: video surveillance, background subtraction, contrast limited histogram equalization, illumination invariance, object tracking, object detection, behavior understanding, dynamic scenes

Procedia PDF Downloads 256
607 Coronary Artery Calcium Score and Statin Treatment Effect on Myocardial Infarction and Major Adverse Cardiovascular Event of Atherosclerotic Cardiovascular Disease: A Systematic Review and Meta-Analysis

Authors: Yusra Pintaningrum, Ilma Fahira Basyir, Sony Hilal Wicaksono, Vito A. Damay

Abstract:

Background: Coronary artery calcium (CAC) scores play an important role in improving prognostic accuracy and can be selectively used to guide the allocation of statin therapy for atherosclerotic cardiovascular disease outcomes; they are also potentially associated with the occurrence of MACE (major adverse cardiovascular events) and MI (myocardial infarction). Objective: This systematic review and meta-analysis aims to analyze the findings of studies on the CAC score and the effect of statin treatment on MI and MACE risk. Methods: A search for published scientific articles following the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analysis) method was conducted on the PubMed, Cochrane Library, and Medline databases for articles published in the last 20 years on “coronary artery calcium” AND “statin” AND “cardiovascular disease”. A further systematic review and meta-analysis using RevMan version 5.4 was performed based on the included published scientific articles. Results: Based on 11 included studies with a total of 1055 participants, we performed a meta-analysis and found that individuals with a CAC score > 0 had a higher risk of MI (RR = 9.48; 95% CI: 6.22 – 14.45) and MACE (RR = 3.48; 95% CI: 2.98 – 4.05) than individuals with a CAC score of 0. Statin compared against non-statin treatment showed a statistically insignificant overall effect on the risk of MI (P = 0.81) and MACE (P = 0.89) in individuals with an elevated CAC score of 1 – 100 (P = 0.65) and > 100 (P = 0.11). Conclusions: This study found that individuals with an elevated CAC score have a higher risk of MI and MACE than individuals with a non-elevated CAC score. There is no significant effect of statin compared against non-statin treatment in reducing MI and MACE in individuals with elevated CAC scores of 1 – 100 or > 100.
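The risk-ratio statistic reported above is computed from 2x2 event tables. Here is a hedged sketch of that standard calculation (log-RR standard error and 95% CI); the event counts below are invented for illustration and are not the meta-analysis data.

```python
import math

def risk_ratio(a, n1, c, n2):
    """RR and 95% CI from a 2x2 table: a/n1 events in the exposed group
    (e.g. CAC > 0), c/n2 events in the unexposed group (CAC = 0)."""
    rr = (a / n1) / (c / n2)
    # Standard error of log(RR) for a ratio of two binomial proportions.
    se_log = math.sqrt(1 / a - 1 / n1 + 1 / c - 1 / n2)
    lo = math.exp(math.log(rr) - 1.96 * se_log)
    hi = math.exp(math.log(rr) + 1.96 * se_log)
    return rr, lo, hi

# Invented counts: 30/300 events with CAC > 0 vs. 5/300 events with CAC = 0.
rr, lo, hi = risk_ratio(30, 300, 5, 300)
```

When the lower CI bound stays above 1, as in the pooled RRs reported in this abstract, the elevated-CAC group's excess risk is statistically significant.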

Keywords: coronary artery calcium, statin, cardiovascular disease, myocardial infarction, MACE

Procedia PDF Downloads 101
606 Dido: An Automatic Code Generation and Optimization Framework for Stencil Computations on Distributed Memory Architectures

Authors: Mariem Saied, Jens Gustedt, Gilles Muller

Abstract:

We present Dido, a source-to-source auto-generation and optimization framework for multi-dimensional stencil computations. It enables a large programmer community to easily and safely implement stencil codes on distributed-memory parallel architectures with Ordered Read-Write Locks (ORWL) as an execution and communication back-end. ORWL provides inter-task synchronization for data-oriented parallel and distributed computations. It has been proven to guarantee equity, liveness, and efficiency for a wide range of applications, particularly for iterative computations. Dido consists mainly of an implicitly parallel domain-specific language (DSL) implemented as a source-level transformer. It captures domain semantics at a high level of abstraction and generates parallel stencil code that leverages all ORWL features. The generated code is well-structured and lends itself to different possible optimizations. In this paper, we enhance Dido to handle both Jacobi and Gauss-Seidel grid traversals. We integrate temporal blocking into the Dido code generator in order to reduce the communication overhead and minimize data transfers. To increase data locality and improve intra-node data reuse, we couple the code generation technique with the polyhedral parallelizer Pluto. The accuracy and portability of the generated code are guaranteed thanks to a parametrized solution. The combination of ORWL features, the code generation pattern, and the suggested optimizations makes Dido a powerful code generation framework for stencil computations in general, and for distributed-memory architectures in particular. We present a wide range of experiments over a number of stencil benchmarks.
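The difference between the two grid traversals Dido now handles can be shown on a 1-D heat-equation stencil. This is a language-agnostic sketch (Dido itself generates C code over ORWL), shown here in Python for brevity: Jacobi reads only the previous iteration, while Gauss-Seidel reuses values already updated in the current sweep and so needs no second array.

```python
def jacobi_sweep(u):
    """One Jacobi sweep: every update reads the previous iteration only."""
    new = u[:]
    for i in range(1, len(u) - 1):
        new[i] = 0.5 * (u[i - 1] + u[i + 1])
    return new

def gauss_seidel_sweep(u):
    """One Gauss-Seidel sweep: u[i-1] has already been updated this sweep."""
    for i in range(1, len(u) - 1):
        u[i] = 0.5 * (u[i - 1] + u[i + 1])
    return u

# Fixed boundary values 0 and 1; both iterations converge to a linear profile.
u = [0.0, 0.0, 0.0, 0.0, 1.0]
for _ in range(200):
    u = jacobi_sweep(u)
```

The data-dependence difference matters for code generation: Jacobi sweeps are embarrassingly parallel within an iteration, while Gauss-Seidel introduces sweep-order dependences that the generator (and optimizations such as temporal blocking) must respect.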

Keywords: stencil computations, ordered read-write locks, domain-specific language, polyhedral model, experiments

Procedia PDF Downloads 127
605 Infrared Spectroscopy in Tandem with Machine Learning for Simultaneous Rapid Identification of Bacteria Isolated Directly from Patients' Urine Samples and Determination of Their Susceptibility to Antibiotics

Authors: Mahmoud Huleihel, George Abu-Aqil, Manal Suleiman, Klaris Riesenberg, Itshak Lapidot, Ahmad Salman

Abstract:

Urinary tract infections (UTIs) are considered the most common bacterial infections worldwide; they are caused mainly by Escherichia (E.) coli (about 80%), Klebsiella pneumoniae (about 10%), and Pseudomonas aeruginosa (about 6%). Although antibiotics are considered the most effective treatment for bacterial infectious diseases, unfortunately, most bacteria have already developed resistance to the majority of the commonly available antibiotics. Therefore, it is crucial to identify the infecting bacterium and to determine its susceptibility to antibiotics in order to prescribe effective treatment. Classical methods are time-consuming, requiring ~48 hours for determining bacterial susceptibility. Thus, it is highly urgent to develop a new method that can significantly reduce the time required for identifying the infecting bacterium at the species level and diagnosing its susceptibility to antibiotics. Fourier-transform infrared (FTIR) spectroscopy is well known as a sensitive and rapid method, which can detect minor molecular changes in the bacterial genome associated with the development of resistance to antibiotics. The main goal of this study is to examine the potential of FTIR spectroscopy, in tandem with machine learning algorithms, to identify the infecting bacteria at the species level and to determine E. coli susceptibility to different antibiotics directly from patients' urine in about 30 minutes. For this goal, 1600 different E. coli isolates were obtained from different patients' urine samples, measured by FTIR, and analyzed using different machine learning algorithms such as Random Forest, XGBoost, and CNN. We achieved 98% success in isolate-level identification and 89% accuracy in susceptibility determination.

Keywords: urinary tract infections (UTIs), E. coli, Klebsiella pneumoniae, Pseudomonas aeruginosa, bacterial susceptibility to antibiotics, infrared microscopy, machine learning

Procedia PDF Downloads 170
604 A Microsurgery-Specific End-Effector Equipped with a Bipolar Surgical Tool and Haptic Feedback

Authors: Hamidreza Hoshyarmanesh, Sanju Lama, Garnette R. Sutherland

Abstract:

In tele-operative robotic surgery, an ideal haptic device should be equipped with an intuitive and smooth end-effector to cover the surgeon's hand/wrist degrees of freedom (DOF) and translate the hand joint motions to the end-effector of the remote manipulator with low effort and a high level of comfort. This research introduces the design and development of a microsurgery-specific end-effector, a gimbal mechanism possessing 4 passive and 1 active DOFs, equipped with bipolar forceps and haptic feedback. The robust gimbal structure comprises three light-weight links/joints, pitch, yaw, and roll, each consisting of a low-friction support and a 2-channel accurate optical position sensor. The third link, which provides the tool roll, was specifically designed to grip the tool prongs and accommodate a low-mass geared actuator together with a miniaturized capstan-rope mechanism. The actuator is able to generate delicate torques, using a threaded cylindrical capstan, to emulate the sense of pinch/coagulation during conventional microsurgery. While the tool's left prong is fixed to the rolling link, the right prong bears a miniaturized drum sector with a large diameter to expand the force scale and resolution. The drum transmits the actuator output torque to the right prong and generates haptic force feedback at the tool level. The tool is also equipped with a Hall-effect sensor and a magnet bar, installed vis-à-vis on the inner sides of the two prongs, to measure the tooltip distance and provide an analogue signal to the control system. We believe that such a haptic end-effector could significantly increase the accuracy of telerobotic surgery and help avoid the high forces that are known to cause bleeding/injury.
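The role of the large-diameter drum can be made concrete with the basic torque-to-force relation at the prong: the felt force scales inversely with the drum radius, so a larger drum yields a smaller force per actuator torque step, i.e. a finer force resolution over the same torque range. The numerical values below are illustrative assumptions, not the device's specifications.

```python
def prong_force(torque_nm: float, drum_radius_m: float) -> float:
    """Tangential force (N) at the prong for a given transmitted torque."""
    return torque_nm / drum_radius_m

# Illustrative comparison: the same 0.01 N*m torque step felt through a
# 15 mm vs. a 30 mm drum radius (assumed values).
small_drum_force = prong_force(0.01, 0.015)
large_drum_force = prong_force(0.01, 0.030)  # half the force per torque step
```

This is why the abstract pairs "large diameter" with expanded "force scale and resolution": the drum acts as a mechanical gear between actuator torque and fingertip force.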

Keywords: end-effector, force generation, haptic interface, robotic surgery, surgical tool, tele-operation

Procedia PDF Downloads 118
603 Predicting Returns Volatilities and Correlations of Stock Indices Using Multivariate Conditional Autoregressive Range and Return Models

Authors: Shay Kee Tan, Kok Haur Ng, Jennifer So-Kuen Chan

Abstract:

This paper extends the conditional autoregressive range (CARR) model to a multivariate CARR (MCARR) model and further to a two-stage MCARR-return model to model and forecast the volatilities, correlations and returns of multiple financial assets. The first stage fits the scaled realised Parkinson volatility measures of the individual series, and of their pairwise sums of indices, to the MCARR model to obtain in-sample estimates and forecasts of volatilities for these individual and pairwise sum series. Covariances are then calculated to construct the fitted variance-covariance matrix of returns, which is input into the stage-two return model to capture the heteroskedasticity of the assets' returns. We investigate different choices of mean functions to describe the volatility dynamics. Empirical applications are based on the Standard and Poor's 500, Dow Jones Industrial Average and Dow Jones United States Financial Services Indices. Results show that the stage-one MCARR models using asymmetric mean functions give better in-sample model fits than those based on symmetric mean functions. They also provide better out-of-sample volatility forecasts than those using CARR models based on two robust loss functions, with the scaled realised open-to-close volatility measure as the proxy for the unobserved true volatility. We also find that the stage-two return models with constant means and multivariate Student-t errors give better in-sample fits than the Baba, Engle, Kraft, and Kroner type of generalized autoregressive conditional heteroskedasticity (BEKK-GARCH) models. The estimates and forecasts of value-at-risk (VaR) and conditional VaR based on the best MCARR-return models for each asset are provided and tested using the Kupiec test to confirm the accuracy of the VaR forecasts.
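The Parkinson volatility measure the stage-one model fits is a standard range-based estimator built from daily high/low prices, sigma^2 = (ln(H/L))^2 / (4 ln 2) per day. The sketch below computes it for a few invented price ranges; the annualisation convention (252 trading days) is an assumption for illustration, and the paper additionally rescales the measure before fitting.

```python
import math

def parkinson_variance(high: float, low: float) -> float:
    """Daily Parkinson variance estimate from the day's high and low prices."""
    return (math.log(high / low) ** 2) / (4.0 * math.log(2.0))

# Invented daily (high, low) pairs for one index.
days = [(102.0, 99.0), (103.5, 100.1), (101.0, 98.9)]
daily_vars = [parkinson_variance(h, l) for h, l in days]
# Annualise by averaging the daily variances and scaling by 252 trading days.
annualised_vol = math.sqrt(252 * sum(daily_vars) / len(daily_vars))
```

Because it uses the full intraday range rather than only close-to-close returns, the Parkinson estimator is considerably more efficient per observation, which is the motivation for range-based models like CARR in the first place.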

Keywords: range-based volatility, correlation, multivariate CARR-return model, value-at-risk, conditional value-at-risk

602 Surface Tension and Bulk Density of Ammonium Nitrate Solutions: A Molecular Dynamics Study

Authors: Sara Mosallanejad, Bogdan Z. Dlugogorski, Jeff Gore, Mohammednoor Altarawneh

Abstract:

Ammonium nitrate (NH₄NO₃, AN) is the main component of AN emulsion and fuel oil (ANFO) explosives, which are used extensively in civilian and mining operations for underground development and tunnelling applications. The emulsion formulation and the wettability of AN prills, which affect the physical stability and detonation of ANFO, depend strongly on the surface tension, density and viscosity of the liquid used. Therefore, for engineering applications of this material, determination of the density and surface tension of concentrated aqueous AN solutions is essential. Molecular dynamics (MD) simulations have been used to investigate the density and surface tension of highly concentrated ammonium nitrate solutions, up to the solubility limit of AN in water. The simulations employ non-polarisable models for water and the ions, and the electronic continuum correction (ECC) model introduces polarisation implicitly into the non-polarisable model by scaling the ionic charges. The calculated densities and surface tensions of the solutions have been compared with available experimental values. Our MD simulations show that the non-polarisable model with full-charge ions overestimates the experimental results, while the reduced-charge model fits the experimental data very well. With the non-polarisable force fields, ions are repelled from the interface. However, when the charges of the ions in the original model are scaled by the ECC scaling factor, the ions create an ionic double layer near the interface: anions migrate toward the interface while cations stay in the bulk of the solution. Similar ion orientations near the interface were observed when polarisable models were used in the simulations. In conclusion, applying the ECC model to the non-polarisable force field yields the density and surface tension of the AN solutions with high accuracy in comparison with experimental measurements.
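The ECC charge scaling amounts to multiplying each ionic charge by 1/√ε_el, where ε_el ≈ 1.78 is the electronic (high-frequency) dielectric constant of water, giving a scaling factor of about 0.75. A minimal sketch of this preprocessing step is shown below (the dictionary of ion names and charges is illustrative, not the authors' force-field file):

```python
import math

# ECC scaling factor: 1 / sqrt(eps_el), with eps_el ~ 1.78 for water
EPS_EL = 1.78
ECC_SCALE = 1.0 / math.sqrt(EPS_EL)  # ~0.75

# Hypothetical full formal charges (in units of e) for the ion models
full_charges = {"NH4+": +1.0, "NO3-": -1.0}

# Scaled charges used in the ECC-corrected non-polarisable force field
ecc_charges = {ion: round(q * ECC_SCALE, 3) for ion, q in full_charges.items()}
# NH4+ -> ~+0.75 e, NO3- -> ~-0.75 e
```

In practice the scaled charges would be written back into the simulation topology (e.g. per-atom partial charges summing to the scaled formal charge) before running the MD production runs.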

Keywords: ammonium nitrate, electronic continuum correction, non-polarisable force field, surface tension
