Search results for: error estimate
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3557

2777 Investment and Economic Growth: An Empirical Analysis for Tanzania

Authors: Manamba Epaphra

Abstract:

This paper analyzes the causal relationships among domestic private investment, public investment, foreign direct investment (FDI) and economic growth in Tanzania over the 1970-2014 period. A modified neo-classical growth model that includes control variables such as trade liberalization, life expectancy and macroeconomic stability, proxied by inflation, is used to estimate the impact of investment on economic growth. In addition, the economic growth models of Phetsavong and Ichihashi (2012) and Le and Suruga (2005) are used to estimate the crowding-out effect of public investment on private domestic investment on the one hand and on foreign direct investment on the other. A correlation test is applied to check the correlation among independent variables, and the results show very low correlation, suggesting that multicollinearity is not a serious problem. Moreover, the diagnostic tests, including the RESET regression errors specification test, the Breusch-Godfrey serial correlation LM test, the Jarque-Bera normality test and the White heteroskedasticity test, reveal that the model shows no signs of misspecification and that the residuals are serially uncorrelated, normally distributed and homoskedastic. Overall, the empirical results show that domestic private investment plays an important role in economic growth in Tanzania. FDI also tends to affect growth positively, while control variables such as high population growth and inflation appear to harm economic growth. Results also reveal that control variables such as trade openness and life expectancy improvement tend to increase real GDP growth. Moreover, the revealed negative, albeit weak, association between public and private investment suggests that the positive effect of domestic private investment on economic growth diminishes when the public investment-to-GDP ratio exceeds 8-10 percent. There is thus a great need to promote domestic saving so as to encourage domestic investment for economic growth.
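
As an illustration of this diagnostic battery, the sketch below runs the same four tests with statsmodels on placeholder data (the variable names and series are stand-ins, not the Tanzanian data; function availability is assumed for recent statsmodels versions):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.diagnostic import acorr_breusch_godfrey, het_white, linear_reset
from statsmodels.stats.stattools import jarque_bera

# Illustrative placeholder data: growth regressed on investment shares and controls.
df = pd.DataFrame(np.random.default_rng(0).normal(size=(45, 5)),
                  columns=["gdp_growth", "private_inv", "public_inv", "fdi", "inflation"])

X = sm.add_constant(df[["private_inv", "public_inv", "fdi", "inflation"]])
res = sm.OLS(df["gdp_growth"], X).fit()

print(linear_reset(res, power=2))            # Ramsey RESET: functional-form misspecification
print(acorr_breusch_godfrey(res, nlags=2))   # Breusch-Godfrey: serial correlation
print(jarque_bera(res.resid))                # Jarque-Bera: residual normality
print(het_white(res.resid, X))               # White: heteroskedasticity
```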

Keywords: FDI, public investment, domestic private investment, crowding out effect, economic growth

Procedia PDF Downloads 272
2776 Biomass and Carbon Stock Estimates of Woodlands in the Southeastern Escarpment of Ethiopian Rift Valley: An Implication for Climate Change Mitigation

Authors: Sultan Haji Shube

Abstract:

Woodland ecosystems of the semiarid rift valley of Ethiopia play a significant role in climate change mitigation by sequestering and storing carbon. This study was conducted in the Gidabo river sub-basin on the southeastern rift-valley escarpment of Ethiopia. It aims to estimate the biomass and carbon stocks of woodlands and their implications for climate change mitigation. A total of 44 sampling plots (900 m² each) were systematically laid out in the woodland for vegetation and environmental data collection. A composite soil sample was taken from five locations within each main plot. Both disturbed and undisturbed soil samples were taken at two depths using a soil auger and a core-ring sampler, respectively. An allometric equation was used to estimate aboveground biomass (AGB), while the root-to-shoot ratio method and the Walkley-Black method were used for belowground biomass (BGB) and soil organic carbon (SOC), respectively. Results revealed that the total biomass of the study site was 17.05 t/ha, of which 14.21 t/ha was AGB and 2.84 t/ha was BGB. Moreover, a total carbon stock of 2224.7 t/ha was accumulated, equivalent to 8164.65 t/ha of carbon dioxide. The study also revealed that more carbon was accumulated in the soil than in the biomass. Both aboveground and belowground carbon stocks decreased with increasing altitude, while SOC stocks increased. The AGC and BGC stocks were higher in the lower slope classes, whereas SOC stocks were higher in the higher slope classes than in the lower slopes. Higher carbon stocks were obtained from woody plants with a DBH greater than 16 cm situated in plots facing northwest. Overall, the results add to the information on the carbon stock potential of the woodland and will serve as a baseline scenario for further research, policy makers and land managers.
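
The abstract does not reproduce the specific allometric equation used; as a hedged illustration only, the sketch below applies the widely cited pantropical model of Chave et al. (2014) and a fixed root-to-shoot ratio (the 0.27 ratio, wood densities, and tree measurements are assumptions for the example, not values from this study):

```python
def agb_chave2014(dbh_cm, height_m, wood_density):
    """Aboveground biomass (kg) per tree, Chave et al. (2014) pantropical model."""
    return 0.0673 * (wood_density * dbh_cm**2 * height_m) ** 0.976

def stand_biomass_t_ha(trees, plot_area_m2=900.0, root_shoot=0.27):
    """Scale per-tree AGB to t/ha and add belowground biomass via an assumed
    root-to-shoot ratio. `trees` is a list of (DBH cm, height m, density g/cm3)."""
    agb_kg = sum(agb_chave2014(d, h, rho) for d, h, rho in trees)
    agb_t_ha = agb_kg / 1000.0 * (10000.0 / plot_area_m2)
    return agb_t_ha, agb_t_ha * root_shoot   # (AGB, BGB) in t/ha

# Example: three trees measured on one 900 m^2 plot (illustrative values).
agb, bgb = stand_biomass_t_ha([(18.0, 7.5, 0.65), (12.0, 5.0, 0.60), (25.0, 9.0, 0.70)])
print(f"AGB = {agb:.2f} t/ha, BGB = {bgb:.2f} t/ha")
```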

Keywords: allometric equation, climate change mitigation, soil organic carbon, woodland

Procedia PDF Downloads 68
2775 Establishment and Validation of Correlation Equations to Estimate Volumetric Oxygen Mass Transfer Coefficient (KLa) from Process Parameters in Stirred-Tank Bioreactors Using Response Surface Methodology

Authors: Jantakan Jullawateelert, Korakod Haonoo, Sutipong Sananseang, Sarun Torpaiboon, Thanunthon Bowornsakulwong, Lalintip Hocharoen

Abstract:

Process scale-up is essential for biological processes to increase production capacity from bench-scale bioreactors to either pilot or commercial production. Scale-up based on a constant volumetric oxygen mass transfer coefficient (KLa) is most commonly used, since oxygen supply is one of the key limiting factors for cell growth. However, estimating KLa for culture vessels operated under different conditions is time-consuming, since it is influenced by many factors. To overcome this issue, this study aimed to establish correlation equations between KLa and operating parameters in 0.5 L and 5 L bioreactors equipped with a pitched-blade impeller and a gas sparger. Temperature, gas flow rate, agitation speed, and impeller position were selected as process parameters, and the equations were created using response surface methodology (RSM) based on a central composite design (CCD). In addition, the effects of these parameters on KLa were investigated. Based on RSM, second-order polynomial models for the 0.5 L and 5 L bioreactors were obtained with acceptable coefficients of determination (R²) of 0.9736 and 0.9190, respectively. These models were validated, and experimental values differed by less than 10% from the predicted values. Moreover, RSM revealed that gas flow rate is the most significant parameter, while temperature and agitation speed were also found to greatly affect KLa in both bioreactors. Impeller position, however, was shown to influence KLa only in the 5 L system. In summary, these modeled correlations can be used to accurately predict KLa within the specified range of process parameters for two different sizes of bioreactors for further scale-up application.
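
A second-order (full quadratic) response surface of the kind produced by CCD-based RSM can be fitted as in the following sketch, using scikit-learn on illustrative data rather than the authors' measurements:

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.metrics import r2_score

# Factors: temperature (C), gas flow rate (vvm), agitation speed (rpm), impeller position.
rng = np.random.default_rng(1)
X = rng.uniform([25, 0.5, 100, 0.2], [37, 2.0, 500, 0.8], size=(20, 4))
# Synthetic KLa response standing in for measured values.
kla = 5 + 8 * X[:, 1] + 0.01 * X[:, 2] - 5e-6 * X[:, 2] ** 2 + rng.normal(0, 0.5, 20)

# Full quadratic (second-order) response surface, as produced by CCD-based RSM.
model = make_pipeline(PolynomialFeatures(degree=2, include_bias=False), LinearRegression())
model.fit(X, kla)
print("R^2 =", r2_score(kla, model.predict(X)))
```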

Keywords: response surface methodology, scale-up, stirred-tank bioreactor, volumetric oxygen mass transfer coefficient

Procedia PDF Downloads 188
2774 Evaluation of IMERG Performance at Estimating the Rainfall Properties through Convective and Stratiform Rain Events in a Semi-Arid Region of Mexico

Authors: Eric Muñoz de la Torre, Julián González Trinidad, Efrén González Ramírez

Abstract:

Rain varies greatly in duration, intensity, and spatial coverage, so it is important to have sub-daily rainfall data for various applications, including risk prevention. However, ground measurements are limited by the low and irregular density of rain gauges. An alternative to this problem is Satellite Precipitation Products (SPPs), which use passive microwave and infrared sensors to estimate rainfall; these SPPs, however, have to be validated before their application. The aim of this study is to evaluate the performance of the IMERG (Integrated Multi-satellitE Retrievals for Global Precipitation Measurement) final run V06B SPP in a semi-arid region of Mexico, using sub-daily data from 4 automatic rain gauges (pluviographs) for October 2019 and June to September 2021 and applying the Minimum Inter-event Time (MIT) criterion, with a dry period of 10 hrs, to separate unique rain events for the purpose of evaluating the rainfall properties (depth, duration and intensity). Point-to-pixel analysis and continuous, categorical, and volumetric statistical metrics were used. Results show that IMERG is capable of estimating rainfall depth with a slight overestimation, but it is unable to identify the real duration and intensity of rain events, showing large overestimations and underestimations, respectively. The study zone presented 80 to 85% convective rain events, the rest being stratiform rain events, classified by the depth magnitude variation of IMERG pixels and pluviographs. IMERG showed poorer performance at detecting the former but performed well at estimating stratiform rain events, which originate from cold fronts.
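
A minimal sketch of the MIT event-separation step, assuming a pandas Series of sub-daily gauge records with a DatetimeIndex (column names and the handling of single-sample events are illustrative choices):

```python
import pandas as pd

def separate_events(rain: pd.Series, mit_hours: float = 10.0) -> pd.DataFrame:
    """Split a sub-daily rain-gauge series (mm per time step, DatetimeIndex)
    into unique events with the Minimum Inter-event Time (MIT) criterion:
    a new event starts after a dry spell of at least `mit_hours`."""
    wet = rain[rain > 0]
    new_event = wet.index.to_series().diff() > pd.Timedelta(hours=mit_hours)
    event_id = new_event.cumsum()
    grouped = wet.groupby(event_id)
    events = pd.DataFrame({
        "depth_mm": grouped.sum(),
        "start": grouped.apply(lambda s: s.index.min()),
        "end": grouped.apply(lambda s: s.index.max()),
    })
    events["duration_h"] = (events["end"] - events["start"]).dt.total_seconds() / 3600
    # Guard against zero duration for events captured in a single record.
    events["intensity_mm_h"] = events["depth_mm"] / events["duration_h"].clip(lower=1e-9)
    return events
```

The resulting per-event depth, duration, and intensity are exactly the three rainfall properties compared against the co-located IMERG pixel.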

Keywords: IMERG, rainfall, rain gauge, remote sensing, statistical evaluation

Procedia PDF Downloads 54
2773 Engineering Thermal-Hydraulic Simulator Based on Complex Simulation Suite “Virtual Unit of Nuclear Power Plant”

Authors: Evgeny Obraztsov, Ilya Kremnev, Vitaly Sokolov, Maksim Gavrilov, Evgeny Tretyakov, Vladimir Kukhtevich, Vladimir Bezlepkin

Abstract:

Over the last decade, a specific set of connected software tools and calculation codes has been gradually developed. It allows simulating I&C systems and thermal-hydraulic, neutron-physical and electrical processes in the elements and systems of an NPP unit (initially with WWER (pressurized water reactor)). In 2012 it was named the complex simulation suite “Virtual Unit of NPP” (CSS “VEB” for short). Proper application of this complex tool results in a coupled mathematical computational model, which, for a specific NPP design, is called the Virtual Power Unit (VPU for short). A VPU can be used for comprehensive modelling of power unit operation, checking operator functions on a virtual main control room, and modelling complicated scenarios for normal modes and accidents. In addition, CSS “VEB” contains a combination of thermal-hydraulic codes: the best-estimate (two-fluid) calculation codes KORSAR and CORTES and the homogeneous calculation code TPP. Thus, to analyze a specific technological system, one can build thermal-hydraulic simulation models with different levels of detail, up to a nodalization scheme with real geometry, and the result at some points is similar to the notion of an “engineering/testing simulator” described by the European Utility Requirements (EUR) for LWR nuclear power plants. The paper is dedicated to a description of the tools mentioned above and an example of the application of the engineering thermal-hydraulic simulator in an analysis of the boric acid concentration in the primary coolant (changed by the make-up and boron control system).

Keywords: best-estimate code, complex simulation suite, engineering simulator, power plant, thermal hydraulic, VEB, virtual power unit

Procedia PDF Downloads 365
2772 Theory of the Optimum Signal Approximation Clarifying the Importance in the Recognition of Parallel World and Application to Secure Signal Communication with Feedback

Authors: Takuro Kida, Yuichi Kida

Abstract:

In this paper, we present the mathematical basis of a new class of algorithms that treats a historical cause of continuous discrimination in the world, as well as its solution, by introducing the new concept of a parallel world that includes an invisible set of errors as its companion. Given a matrix operator-filter bank in which the matrix operator-analysis-filter bank H and the matrix operator-sampling-filter bank S are specified, we first introduce a detailed algorithm to derive the optimum matrix operator-synthesis-filter bank Z that simultaneously minimizes all the worst-case measures of the matrix operator-error-signals E(ω) = F(ω) − Y(ω) between the matrix operator-input-signals F(ω) and the matrix operator-output-signals Y(ω) of the filter bank. Feedback is then introduced into this approximation theory, and it is shown that introducing conversations with feedback is not automatically superior to the accumulation of existing knowledge of signal prediction. Secondly, the concept of category from the field of mathematics is applied to the above optimum signal approximation, and it is indicated that the category-based approximation theory applies to a set-theoretic consideration of human recognition. Based on this discussion, it is shown naturally why the narrow perception that tends to create isolation shows an apparent advantage in the short term, and why such narrow thinking often becomes intimate with discriminatory action in a human group. Throughout these considerations, it is argued that, in order to abolish easy and intimate discriminatory behavior, it is important to create a parallel world of conception in which we share the set of invisible error signals, including the words and the consciousness of both worlds.

Keywords: matrix filterbank, optimum signal approximation, category theory, simultaneous minimization

Procedia PDF Downloads 126
2771 Application of Particle Swarm Optimization to Thermal Sensor Placement for Smart Grid

Authors: Hung-Shuo Wu, Huan-Chieh Chiu, Xiang-Yao Zheng, Yu-Cheng Yang, Chien-Hao Wang, Jen-Cheng Wang, Chwan-Lu Tseng, Joe-Air Jiang

Abstract:

Dynamic Thermal Rating (DTR) provides crucial information by estimating the ampacity of transmission lines to improve power dispatching efficiency. To perform DTR, it is necessary to install on-line thermal sensors to monitor conductor temperature and weather variables. A simple and intuitive strategy is to allocate a thermal sensor to every span of a transmission line, but the cost of the sensors might be too high to bear. To deal with the cost issue, a thermal sensor placement problem must be solved. This research proposes and implements a hybrid algorithm that combines proper orthogonal decomposition (POD) with particle swarm optimization (PSO). The proposed hybrid algorithm solves a multi-objective optimization problem seeking both the minimum number of sensors and the minimum error in conductor temperature, and the optimal sensor placement is determined simultaneously. Data for 345 kV transmission lines and hourly weather data from the Taiwan Power Company and the Central Weather Bureau (CWB), respectively, are used by the proposed method. The simulation results indicate that the number of sensors could be reduced using the proposed optimal placement method while an acceptable error in conductor temperature is achieved. This study provides power companies with a reliable reference for efficiently monitoring and managing their power grids.
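
The PSO stage might look like the following sketch, under stated assumptions: candidate placements are encoded as continuous scores thresholded to a binary mask, and the objective, a placeholder rather than the authors' POD-based error model, trades off sensor count against reconstruction error:

```python
import numpy as np

def pso_minimize(objective, dim, n_particles=30, iters=200, seed=0):
    """Minimal particle swarm optimizer (global-best topology)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(0, 1, (n_particles, dim))   # particle positions
    v = np.zeros_like(x)                        # velocities
    pbest, pbest_f = x.copy(), np.apply_along_axis(objective, 1, x)
    g = pbest[pbest_f.argmin()].copy()
    w, c1, c2 = 0.7, 1.5, 1.5                   # inertia, cognitive, social weights
    for _ in range(iters):
        r1, r2 = rng.uniform(size=(2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, 0, 1)
        f = np.apply_along_axis(objective, 1, x)
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()

# Hypothetical objective: weighted sum of sensor count and a stand-in for the
# POD reconstruction error of conductor temperature (not the authors' model).
def objective(scores, alpha=0.05):
    mask = scores > 0.5                 # binary placement decision per span
    n = mask.sum()
    recon_error = 1.0 / (1.0 + n)       # placeholder: error shrinks with more sensors
    return recon_error + alpha * n

best, best_f = pso_minimize(objective, dim=50)
print((best > 0.5).sum(), "sensors selected, objective =", round(best_f, 4))
```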

Keywords: dynamic thermal rating, proper orthogonal decomposition, particle swarm optimization, sensor placement, smart grid

Procedia PDF Downloads 421
2770 Assessing the Accessibility to Primary Percutaneous Coronary Intervention

Authors: Tzu-Jung Tseng, Pei-Hsuen Han, Tsung-Hsueh Lu

Abstract:

Background: Ensuring that patients with ST-elevation myocardial infarction (STEMI) can reach hospitals that perform percutaneous coronary intervention (PCI) in time is an important concern of healthcare managers. One commonly used method to assess the coverage of population access to a PCI hospital is the GIS-estimated linear distance (crow-fly distance) between the district centroid and the nearest PCI hospital. If the distance is within a given threshold (such as 20 km), the entire population of that district is considered to have appropriate access to PCI. The premise of using the district centroid to estimate the coverage of the population residing in that district is that the people living in the district are evenly distributed. In reality, population density is not evenly distributed within an administrative district, especially in rural districts. Fortunately, the Taiwan government recently released the basic statistical area (on average 450 persons per area), which provides an opportunity to estimate the coverage of population access to PCI services more accurately. Objectives: We aimed in this study to compare the population covered by a given PCI hospital according to the traditional administrative district versus the basic statistical area. We further examined whether the differences between the two geographic units would be larger in rural areas than in urban areas. Method: We selected two hospitals in Tainan City for this analysis: hospital A is in an urban area and hospital B in a rural area. The population in each traditional administrative district and basic statistical area was obtained from the Taiwan National Geographic Information System, Ministry of the Interior. Results: The estimated population living within 20 km of hospitals A and B was 1,515,846 and 323,472, respectively, according to the traditional administrative district, and 1,506,325 and 428,556 according to the basic statistical area. Conclusion: In the urban area, the estimated population with access to PCI services was similar between the two geographic units. In rural areas, however, the access population would be underestimated when district centroids are used.
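
The coverage computation itself is straightforward; a minimal sketch using the haversine (crow-fly) distance, with illustrative coordinates rather than the Tainan data:

```python
import numpy as np

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle ('crow-fly') distance in km between two points."""
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * np.arcsin(np.sqrt(a))

def covered_population(units, hospital, radius_km=20.0):
    """Sum the population of geographic units whose centroid lies within
    `radius_km` of the PCI hospital. `units` is a list of (lat, lon, pop)."""
    return sum(pop for lat, lon, pop in units
               if haversine_km(lat, lon, hospital[0], hospital[1]) <= radius_km)

# The same function applies to either geographic unit: pass district centroids
# or the much finer basic-statistical-area centroids as `units`.
districts = [(23.00, 120.20, 51000), (23.10, 120.35, 43000)]   # illustrative
print(covered_population(districts, hospital=(22.99, 120.21)))
```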

Keywords: accessibility, basic statistical area, modifiable areal unit problem (MAUP), percutaneous coronary intervention (PCI)

Procedia PDF Downloads 445
2769 A Comparative Analysis of the Private and Social Benefit-Cost Ratios of Organic and Inorganic Rice Farming: Case Study of Smallholder Farmers in the Aveyime Community, Ghana

Authors: Jerome E. Abiemo, Takeshi Mizunoya

Abstract:

The Aveyime community in the Volta region of Ghana is one of the major hubs of rice production. In the past, rice farmers applied organic pesticides to control pests and compost as a soil amendment to improve fertility and productivity. However, the introduction of chemical pesticides and fertilizers has led many farmers to convert to an inorganic system of rice production without considering the social costs (e.g., groundwater contamination and health costs) related to the use of pesticides. This study estimates and compares the private and social benefit-cost ratios (BCRs) of organic and inorganic systems of rice production. Both stratified and simple random sampling techniques were employed to select 300 organic and inorganic rice farmers and 50 pesticide applicators, and the respondents were interviewed with pre-tested questionnaires. The Contingent Valuation Method (CVM), which elicits organic farmers' Willingness-to-Pay (WTP), was employed to estimate the cost of groundwater contamination, and the Cost of Illness (COI) analysis was used to estimate the health cost of pesticide-induced poisoning of applicators. The collated data were analyzed with the aid of Microsoft Excel. The study found that high private benefit (e.g., increases in farm yield and income) was the most influential factor in the rapid adoption of pesticides among rice farmers. The study also shows that the social costs of inorganic rice production were high; as a result, the social BCR of inorganic farming (0.2) was low compared with that of organic farming (0.7). Based on the results, it is recommended that government impose a pesticide environmental tax, review current agricultural policies to favour organic farming, and promote extension education for farmers on pesticide risk, to ensure agricultural and environmental sustainability.
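
The benefit-cost ratios follow the standard definition, with external costs added to the denominator for the social BCR; a minimal sketch with illustrative figures, not the survey estimates:

```python
def bcr(benefits, private_costs, external_costs=0.0):
    """Benefit-cost ratio; include external (social) costs, such as WTP for
    groundwater protection and cost-of-illness, to obtain the social BCR."""
    return benefits / (private_costs + external_costs)

# Illustrative per-hectare figures only (not the study's survey estimates).
yield_revenue, input_cost = 1200.0, 900.0
wtp_groundwater, cost_of_illness = 250.0, 150.0
print("private BCR:", round(bcr(yield_revenue, input_cost), 2))
print("social BCR :", round(bcr(yield_revenue, input_cost,
                                external_costs=wtp_groundwater + cost_of_illness), 2))
```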

Keywords: benefit-cost-ratio (BCR), inorganic farming, pesticides, social cost

Procedia PDF Downloads 463
2768 An Adaptive Oversampling Technique for Imbalanced Datasets

Authors: Shaukat Ali Shahee, Usha Ananthakumar

Abstract:

A data set exhibits the class imbalance problem when one class has very few examples compared to the other class; this is also referred to as between-class imbalance. Traditional classifiers fail to classify minority class examples correctly due to their bias towards the majority class. Apart from between-class imbalance, imbalance within classes, where classes are composed of different numbers of sub-clusters and these sub-clusters contain different numbers of examples, also deteriorates the performance of a classifier. Many methods have previously been proposed for handling the imbalanced dataset problem. These methods can be classified into four categories: data preprocessing, algorithm-based methods, cost-based methods and ensembles of classifiers. Data preprocessing techniques have shown great potential, as they attempt to improve the data distribution rather than the classifier. A data preprocessing technique handles class imbalance either by increasing the minority class examples or by decreasing the majority class examples. Decreasing the majority class examples leads to loss of information, and when the minority class has an absolute rarity, removing majority class examples is generally not recommended. Existing methods for handling class imbalance do not address both between-class imbalance and within-class imbalance simultaneously. In this paper, we propose a method that handles between-class imbalance and within-class imbalance simultaneously for binary classification problems. Removing both imbalances simultaneously eliminates the bias of the classifier towards bigger sub-clusters by minimizing the error domination of bigger sub-clusters in the total error. The proposed method uses model-based clustering to find the presence of sub-clusters or sub-concepts in the dataset. The number of examples oversampled among the sub-clusters is determined based on the complexity of the sub-clusters. The method also takes into consideration the scatter of the data in the feature space and adaptively copes with unseen test data using the Lowner-John ellipsoid to increase the accuracy of the classifier. In this study, a neural network is used, as it is one classifier in which the total error is minimized, and removing the between-class imbalance and within-class imbalance simultaneously helps the classifier give equal weight to all the sub-clusters irrespective of the classes. The proposed method is validated on 9 publicly available data sets and compared with three existing oversampling techniques that rely on the spatial location of minority class examples in the Euclidean feature space. The experimental results show the proposed method to be statistically significantly superior to the other methods in terms of various accuracy measures. Thus, the proposed method can serve as a good alternative for handling problem domains such as credit scoring, customer churn prediction, and financial distress that typically involve imbalanced data sets.
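
A simplified sketch of the clustering-plus-oversampling idea, assuming GMM-based model selection by BIC and proportional allocation across sub-clusters (the paper's complexity-based allocation and the Lowner-John ellipsoid step are not reproduced here):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def adaptive_oversample(X_min, n_target, max_components=5, seed=0):
    """Sketch: find sub-clusters of the minority class via model-based
    clustering (GMM, BIC model selection), then generate synthetic examples
    per sub-cluster, here simply in proportion to cluster size."""
    rng = np.random.default_rng(seed)
    gmms = [GaussianMixture(k, random_state=seed).fit(X_min)
            for k in range(1, max_components + 1)]
    gmm = min(gmms, key=lambda g: g.bic(X_min))      # model-based cluster count
    labels = gmm.predict(X_min)
    synthetic = []
    for k in range(gmm.n_components):
        cluster = X_min[labels == k]
        n_new = int(round(n_target * len(cluster) / len(X_min)))
        if n_new and len(cluster) > 1:
            # Jitter randomly chosen members, scaled by the cluster spread.
            base = cluster[rng.integers(len(cluster), size=n_new)]
            synthetic.append(base + rng.normal(0, 0.1, base.shape) * cluster.std(0))
    return np.vstack(synthetic) if synthetic else np.empty((0, X_min.shape[1]))

X_min = np.random.default_rng(1).normal(size=(40, 2))   # toy minority class
print(adaptive_oversample(X_min, n_target=60).shape)
```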

Keywords: classification, imbalanced dataset, Lowner-John ellipsoid, model based clustering, oversampling

Procedia PDF Downloads 404
2767 Comparison of the Effectiveness of Tree Algorithms in Classification of Spongy Tissue Texture

Authors: Roza Dzierzak, Waldemar Wojcik, Piotr Kacejko

Abstract:

Analysis of the texture of medical images consists of determining the parameters and characteristics of the examined tissue. The main goal is to assign the analyzed area to one of two basic groups: healthy tissue or tissue with pathological changes. CT images of the thoracic-lumbar spine from 15 healthy patients and 15 patients with confirmed osteoporosis were used for the analysis. As a result, 120 samples with dimensions of 50x50 pixels were obtained. The set of features was derived from the histogram, gradient, run-length matrix, co-occurrence matrix, autoregressive model, and Haar wavelet, yielding 290 textural feature descriptors. The dimension of the feature space was reduced using three selection methods: the Fisher coefficient (FC), mutual information (MI), and minimization of the classification error probability combined with the average correlation coefficient between the chosen features (POE + ACC). Each method returned the ten features occupying the initial places in the ranking devised according to its own coefficient. The Fisher coefficient and mutual information selections yielded the same features arranged in a different order; in both rankings, the 50th percentile (Perc.50%) was found in first place, with the next selected features coming from the co-occurrence matrix. The feature sets selected in the selection process were evaluated using six classification tree methods: decision stump (DS), Hoeffding tree (HT), logistic model trees (LMT), random forest (RF), random tree (RT) and reduced error pruning tree (REPT). To assess the accuracy of the classifiers, the following parameters were used: overall classification accuracy (ACC), true positive rate (TPR, classification sensitivity), true negative rate (TNR, classification specificity), positive predictive value (PPV) and negative predictive value (NPV). Taking the classification results into account, the best results were obtained for the Hoeffding tree and logistic model trees classifiers using the feature set selected by the POE + ACC method. For the Hoeffding tree classifier, the highest values of three parameters were obtained: ACC = 90%, TPR = 93.3% and PPV = 93.3%; the values of the other two parameters, TNR = 86.7% and NPV = 86.6%, were close to the maximum values obtained for the LMT classifier. For the logistic model trees classifier, the same ACC value of 90% was obtained, with the highest values for TNR = 88.3% and NPV = 88.3%, while the other two parameters remained close to the highest values, TPR = 91.7% and PPV = 91.6%. The results of the experiment show that the use of classification trees is an effective method for classifying texture features, allowing the condition of spongy tissue to be identified for healthy cases and those with osteoporosis.
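
The Fisher-coefficient ranking followed by tree-based classification can be sketched as follows, with synthetic stand-ins for the 120 samples and 290 descriptors and scikit-learn's random forest standing in for the six tree methods:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def fisher_score(X, y):
    """Fisher coefficient per feature for a two-class problem:
    (m1 - m2)^2 / (s1^2 + s2^2)."""
    X0, X1 = X[y == 0], X[y == 1]
    return (X0.mean(0) - X1.mean(0)) ** 2 / (X0.var(0) + X1.var(0) + 1e-12)

# Illustrative stand-ins for 290 texture descriptors of 120 samples.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 290))
y = rng.integers(0, 2, 120)
X[y == 1, :10] += 1.0                   # make 10 features informative

top10 = np.argsort(fisher_score(X, y))[::-1][:10]    # keep the ten top-ranked features
acc = cross_val_score(RandomForestClassifier(random_state=0), X[:, top10], y, cv=5)
print("selected:", top10, "CV accuracy: %.2f" % acc.mean())
```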

Keywords: classification, feature selection, texture analysis, tree algorithms

Procedia PDF Downloads 163
2766 On the Question of Ideology: Criticism of the Enlightenment Approach and Theory of Ideology as Objective Force in Gramsci and Althusser

Authors: Edoardo Schinco

Abstract:

Studying the Marxist intellectual tradition, it is possible to verify that there have been numerous cases of philosophical regression, in which the important achievements of detailed studies have been replaced by naïve ideas and earlier misunderstandings: one of the most important examples of this tendency relates to the question of ideology. According to a common Enlightenment approach, ideology is essentially not a reality, i.e., not a factor capable of having an effect on reality itself; in other words, ideology is a mere error without specific historical meaning, due only to the ignorance or inability of subjects to understand the truth. From this point of view, the consequent and immediate practice against every form of ideology is rational dialogue, reasoning based on common sense, in order to dispel the obscurity of ignorance through the light of pure reason. The limits of this philosophical orientation are, however, both theoretical and practical: on the one hand, the Enlightenment criticism of ideology is not a historicist thought, since it cannot grasp the inner connection that ties a historical context and its peculiar ideology together; on the other hand, when the Enlightenment approach fails to release people from their illusions (e.g., when the ideology persists despite the explanation of its illusoriness), it usually becomes a racist or elitist thought. Unlike this first conception of ideology, Gramsci attempts to recover Marx's original thought and to valorize its dialectical methodology with respect to the reality of ideology. As Marx suggests, ideology, in the negative sense, is surely an error, a misleading knowledge that aims to defend the current state of things and to conceal social, political or moral contradictions; but that is precisely why the ideological error is not casual: every ideology is mediately rooted in a particular material context, from which it derives its raison d'être. Gramsci avoids, however, any mechanistic interpretation of Marx, and for this reason he underlines the dialectical relation that exists between the material base and the ideological superstructure; in this way, a specific ideology is not only a passive product of the base but also an active factor that reacts on the base itself and modifies it. There is, therefore, a considerable revaluation of ideology's role in the maintenance of the status quo and a consequent thematization both of ideology as an objective force, active in history, and of ideology as the cultural hegemony of the ruling class over subordinate groups. Among the Marxists, the French philosopher Louis Althusser also contributed to this crucial question; as a follower of Gramsci's thought, he develops the idea of ideology as an objective force through the notions of the Repressive State Apparatus (RSA) and the Ideological State Apparatuses (ISAs). In addition, his philosophy is characterized by the presence of structuralist elements, which must be studied, since they deeply change the theoretical foundation of his Marxist thought.

Keywords: Althusser, enlightenment, Gramsci, ideology

Procedia PDF Downloads 182
2765 Using Equipment Telemetry Data for Condition-Based Maintenance Decisions

Authors: John Q. Todd

Abstract:

Given that modern equipment can provide comprehensive health, status, and error-condition data via built-in sensors, maintenance organizations have a new and valuable source of insight to take advantage of. This presentation will show what these data payloads might look like and how they can be filtered, visualized, calculated into metrics, used for machine learning, and turned into alerts that trigger further action.
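
A hedged sketch of such a filter-metrics-alerts pipeline, with assumed column names and thresholds (not taken from the presentation):

```python
import pandas as pd

# Sketch: turn raw telemetry payloads into a rolling health metric and alerts.
telemetry = pd.DataFrame({
    "timestamp": pd.date_range("2021-06-01", periods=6, freq="h"),
    "vibration_mm_s": [2.1, 2.3, 2.2, 4.8, 5.1, 5.4],
    "error_code": [0, 0, 0, 0, 7, 7],
}).set_index("timestamp")

# Metric: 3-hour rolling mean vibration; alert on threshold breach or error code.
telemetry["vib_roll_mean"] = telemetry["vibration_mm_s"].rolling("3h").mean()
alerts = telemetry[(telemetry["vib_roll_mean"] > 4.0) | (telemetry["error_code"] != 0)]
for ts, row in alerts.iterrows():
    print(f"ALERT {ts}: vibration={row.vibration_mm_s}, code={int(row.error_code)}")
```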

Keywords: condition based maintenance, equipment data, metrics, alerts

Procedia PDF Downloads 169
2764 Homeless Population Modeling and Trend Prediction Through Identifying Key Factors and Machine Learning

Authors: Shayla He

Abstract:

Background and Purpose: According to Chamie (2017), it is estimated that no fewer than 150 million people, or about 2 percent of the world's population, are homeless. The homeless population in the United States has grown rapidly in the past four decades. In New York City, the sheltered homeless population has increased from 12,830 in 1983 to 62,679 in 2020. Knowing the trend of the homeless population is crucial in helping states and cities make affordable housing plans and other community service plans ahead of time to better prepare for the situation. This study utilized data from New York City, examined the key factors associated with homelessness, and developed systematic modeling to predict future homeless populations. Using the best model developed, named HP-RNN, an analysis of the homeless population change during the months of 2020 and 2021, which were impacted by the COVID-19 pandemic, was conducted. Moreover, HP-RNN was tested on data from Seattle. Methods: The methodology involves four phases in developing robust prediction methods. Phase 1 gathered and analyzed raw data on homeless populations and demographic conditions from five urban centers. Phase 2 identified the key factors that contribute to the rate of homelessness. In Phase 3, three models were built using Linear Regression, Random Forest, and a Recurrent Neural Network (RNN), respectively, to predict the future trend of society's homeless population. Each model was trained and tuned on the New York City dataset, with accuracy measured by Mean Squared Error (MSE). In Phase 4, the final phase, the best model from Phase 3 was evaluated using the data from Seattle, which was not part of the model training and tuning process in Phase 3. Results: Compared to the Linear Regression based model used by HUD et al. (2019), HP-RNN significantly improved the prediction metrics, raising the Coefficient of Determination (R2) from -11.73 to 0.88 and reducing MSE by 99%. HP-RNN was then validated on the data from Seattle, WA, which showed a peak error of 14.5% between the actual and the predicted count. Finally, the modeling results were used to predict the trend during the COVID-19 pandemic. They show a good correlation between the actual and the predicted homeless population, with a peak error of less than 8.6%. Conclusions and Implications: This is the first work to apply an RNN to model time series of homelessness-related data. The model shows a close correlation between the actual and the predicted homeless population. There are two major implications of this result. First, the model can be used to predict the homeless population for the next several years, and the prediction can help states and cities plan ahead on affordable housing allocation and other community services. Moreover, the prediction can serve as a reference for policy makers and legislators as they seek to make changes that may impact the factors closely associated with the future homeless population trend.
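
HP-RNN's exact architecture is not given in the abstract; the sketch below shows only a generic RNN regressor on windowed monthly counts (Keras, synthetic series, all hyperparameters assumed):

```python
import numpy as np
from tensorflow import keras

def make_windows(series, window=12):
    """Turn a monthly count series into (window -> next value) training pairs."""
    X = np.array([series[i:i + window] for i in range(len(series) - window)])
    y = series[window:]
    return X[..., None], y                  # add a feature axis for the RNN

# Illustrative series standing in for the NYC sheltered-population counts.
counts = np.linspace(12830, 62679, 240) + np.random.default_rng(0).normal(0, 500, 240)
scale = counts.max()
X, y = make_windows(counts / scale)

model = keras.Sequential([
    keras.layers.Input(shape=(X.shape[1], 1)),
    keras.layers.SimpleRNN(32),             # recurrent layer over the 12-month window
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse") # MSE, the tuning metric named above
model.fit(X, y, epochs=20, verbose=0)
print("next-month prediction:", model.predict(X[-1:], verbose=0)[0, 0] * scale)
```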

Keywords: homeless, prediction, model, RNN

Procedia PDF Downloads 109
2763 The Impact of Temporal Impairment on Quality of Experience (QoE) in Video Streaming: A No Reference (NR) Subjective and Objective Study

Authors: Muhammad Arslan Usman, Muhammad Rehan Usman, Soo Young Shin

Abstract:

Live video streaming is one of the most widely used services among end users, yet it is a big challenge for network operators in terms of quality. The only way to provide excellent Quality of Experience (QoE) to end users is continuous monitoring of live video streams. For this purpose, several objective algorithms are available that monitor the quality of the video in a live stream, and subjective tests play a very important role in fine-tuning their results. As human perception is considered the most reliable source for assessing the quality of a video stream, subjective tests are conducted in order to develop more reliable objective algorithms. Temporal impairments in a live video stream can have a negative impact on end users. In this paper, we conducted subjective evaluation tests on a set of video sequences containing the temporal impairment known as frame freezing. Frame freezing is considered both a transmission error and a hardware error that can result in the loss of video frames on the receiving side of a transmission system. Our subjective tests covered videos that contain a single freezing event as well as videos that contain multiple freezing events, and we recorded the subjective results for all the videos in order to provide a comparison of the available No Reference (NR) objective algorithms. Finally, we show the performance of the no-reference algorithms used for objective evaluation of the videos and suggest the algorithm that works best. The outcome of this study shows the importance of QoE and its effect on human perception, and the subjective evaluation results can serve to validate objective algorithms.

Keywords: objective evaluation, subjective evaluation, quality of experience (QoE), video quality assessment (VQA)

Procedia PDF Downloads 591
2762 Correction Factors for Soil-Structure Interaction Predicted by Simplified Models: Axisymmetric 3D Model versus Fully 3D Model

Authors: Fu Jia

Abstract:

The effects of soil-structure interaction (SSI) are often studied using axial-symmetric three-dimensional (3D) models to avoid the high computational cost of the more realistic, fully 3D models, which require 2-3 orders of magnitude more computer time and storage. This paper analyzes the error and presents correction factors for system frequency, system damping, and peak amplitude of structural response computed by axisymmetric models, embedded in uniform or layered half-space. The results are compared with those for fully 3D rectangular foundations of different aspect ratios. Correction factors are presented for a range of the model parameters, such as fixed-base frequency, structure mass, height and length-to-width ratio, foundation embedment, soil-layer stiffness and thickness. It is shown that the errors are larger for stiffer, taller and heavier structures, deeper foundations and deeper soil layer. For example, for a stiff structure like Millikan Library (NS response; length-to-width ratio 1), the error is 6.5% in system frequency, 49% in system damping and 180% in peak amplitude. Analysis of a case study shows that the NEHRP-2015 provisions for reduction of base shear force due to SSI effects may be unsafe for some structures and need revision. The presented correction factor diagrams can be used in practical design and other applications.

Keywords: 3D soil-structure interaction, correction factors for axisymmetric models, length-to-width ratio, NEHRP-2015 provisions for reduction of base shear force, rectangular embedded foundations, SSI system frequency, SSI system damping

Procedia PDF Downloads 247
2761 Development of a Model for Predicting Radiological Risks in Interventional Cardiology

Authors: Stefaan Carpentier, Aya Al Masri, Fabrice Leroy, Thibault Julien, Safoin Aktaou, Malorie Martin, Fouad Maaloul

Abstract:

Introduction: During an Interventional Radiology (IR) procedure, the patient's skin dose may become high enough for burns, necrosis, and ulceration to appear. In order to prevent these deterministic effects, predicting the patient's peak skin dose before the procedure is important for improving post-operative care. The objective of this study is to estimate, before the intervention, the patient dose for Chronic Total Occlusion (CTO) procedures by selecting relevant clinical indicators. Materials and methods: 103 procedures were performed in the Interventional Cardiology (IC) department using a Siemens Artis Zee image intensifier, which provides the air kerma of each IC exam. The Peak Skin Dose (PSD) was measured for each procedure using radiochromic films. Patient parameters such as sex, age, weight, and height were recorded. The complexity index (J-CTO score), specific to each intervention, was determined by the cardiologist. A correlation method applied to these indicators allowed their influence on the dose to be specified, and a predictive model of the dose was created using multiple linear regression. Results: Of the 103 patients involved in the study, 5 were excluded for clinical reasons and 2 for placement of radiochromic films outside the exposure field; 96 2D dose maps were finally used. The influencing factors with the highest correlation with the PSD are the patient's diameter and the J-CTO score, and the predictive model is based on these parameters. The comparison between estimated and measured skin doses shows an average difference of 0.85 ± 0.55 Gy for doses of less than 6 Gy, while the mean difference between air kerma and PSD is 1.66 ± 1.16 Gy. Conclusion: Using the developed method, a first estimate of the patient's skin dose is available before the start of the procedure, which helps the cardiologist in carrying out the intervention. This estimate is more accurate than that provided by the air kerma.
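
The regression step can be sketched as an ordinary least squares fit of PSD on the two retained indicators, with illustrative values rather than the 96 measured dose maps:

```python
import numpy as np
import statsmodels.api as sm

# Sketch: multiple linear regression of peak skin dose on the two indicators
# found most correlated with it (patient diameter and J-CTO score).
# All values below are illustrative placeholders.
rng = np.random.default_rng(0)
diameter_cm = rng.normal(28, 4, 96)
jcto = rng.integers(0, 4, 96)
psd_gy = 0.15 * diameter_cm + 0.8 * jcto + rng.normal(0, 0.5, 96)

X = sm.add_constant(np.column_stack([diameter_cm, jcto]))
fit = sm.OLS(psd_gy, X).fit()
print(fit.params)                            # intercept, diameter, J-CTO coefficients

# Pre-procedure estimate for a new patient (diameter 30 cm, J-CTO score 2).
new = sm.add_constant(np.array([[30.0, 2.0]]), has_constant="add")
print("pre-procedure PSD estimate (Gy):", fit.predict(new)[0])
```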

Keywords: chronic total occlusion procedures, clinical experimentation, interventional radiology, patient's peak skin dose

Procedia PDF Downloads 125
2760 A New Nonlinear State-Space Model and Its Application

Authors: Abdullah Eqal Al Mazrooei

Abstract:

In this work, a new nonlinear model is introduced. The model is in state-space form, and its nonlinearity lies in the state equation, where the state vector is multiplied by itself. This structure makes our model generalize many famous models, such as the Lotka-Volterra and Lorenz models, which have many applications in real life. We apply the new model to estimate wind speed using a new nonlinear estimator that is suitable for this model.
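
The "state multiplied by itself" structure can be written compactly with the Kronecker product named in the keywords, x(k+1) = A x(k) + B (x(k) ⊗ x(k)) + w(k); a minimal simulation sketch with assumed coefficient matrices:

```python
import numpy as np

def simulate(A, B, x0, steps, noise=0.0, seed=0):
    """Simulate the quadratic state equation
        x[k+1] = A x[k] + B (x[k] kron x[k]) + w[k],
    where the Kronecker product supplies all products of state components."""
    rng = np.random.default_rng(seed)
    x, out = np.asarray(x0, float), [np.asarray(x0, float)]
    for _ in range(steps):
        x = A @ x + B @ np.kron(x, x) + noise * rng.normal(size=x.shape)
        out.append(x)
    return np.array(out)

# 2-state example: kron(x, x) = [x1^2, x1*x2, x2*x1, x2^2], so suitable A, B
# reproduce Lotka-Volterra-like quadratic interaction terms.
A = np.array([[0.99, 0.00],
              [0.00, 0.98]])
B = 0.01 * np.array([[0, -1, 0, 0],      # -x1*x2 term in the x1 update
                     [0,  0, 1, 0]])     # +x2*x1 term in the x2 update
traj = simulate(A, B, x0=[1.0, 0.5], steps=100)
print(traj[-1])
```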

Keywords: nonlinear systems, state-space model, Kronecker product, nonlinear estimator

Procedia PDF Downloads 675
2759 One vs. Rest and Error Correcting Output Codes Principled Rebalancing Schemes for Solving Imbalanced Multiclass Problems

Authors: Alvaro Callejas-Ramos, Lorena Alvarez-Perez, Alexander Benitez-Buenache, Anibal R. Figueiras-Vidal

Abstract:

This contribution presents a promising formulation which allows extending the principled binary rebalancing procedures, also known as neutral rebalancing mechanisms in the sense that they do not alter the likelihood ratio

Keywords: Bregman divergences, imbalanced multiclass classification, informed re-balancing, invariant likelihood ratio

Procedia PDF Downloads 198
2758 Maximum Likelihood Estimation Methods on a Two-Parameter Rayleigh Distribution under Progressive Type-II Censoring

Authors: Daniel Fundi Murithi

Abstract:

Data from economic, social, clinical, and industrial studies are often in some way incomplete or incorrect due to censoring, and such data may have adverse effects if used in estimation problems. We propose the use of Maximum Likelihood Estimation (MLE) under a progressive type-II censoring scheme to remedy this problem. In particular, maximum likelihood estimates (MLEs) for the location (µ) and scale (λ) parameters of the two-parameter Rayleigh distribution are obtained under a progressive type-II censoring scheme using the Expectation-Maximization (EM) and Newton-Raphson (NR) algorithms. These algorithms are compared because both iteratively produce satisfactory results in the estimation problem. The progressive type-II censoring scheme is used because it allows the removal of test units before the termination of the experiment. Approximate asymptotic variances and confidence intervals for the location and scale parameters are derived and constructed. The efficiency of the EM and NR algorithms is compared in terms of root mean squared error (RMSE), bias, and coverage rate. The simulation study showed that, in most simulation cases, the estimates obtained using the Expectation-Maximization algorithm had smaller biases, smaller variances, narrower confidence intervals, and smaller RMSE than those generated via the Newton-Raphson algorithm. Further, the analysis of a real-life data set (data from simple experimental trials) showed that the Expectation-Maximization algorithm performs better than the Newton-Raphson algorithm in all cases under the progressive type-II censoring scheme.
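
As a simplified, complete-sample illustration (censoring terms omitted), the two-parameter Rayleigh log-likelihood can be maximized numerically as below; the paper's EM and NR iterations extend this likelihood with the progressive type-II censoring contributions:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import rayleigh

# Two-parameter Rayleigh: f(x; mu, lam) = ((x-mu)/lam^2) exp(-(x-mu)^2/(2 lam^2)), x > mu.
def negloglik(theta, x):
    mu, lam = theta
    if lam <= 0 or np.any(x <= mu):
        return np.inf                       # outside the parameter support
    z = x - mu
    return -(np.sum(np.log(z)) - 2 * len(x) * np.log(lam) - np.sum(z**2) / (2 * lam**2))

# Synthetic complete sample with mu = 2.0, lambda = 1.5.
x = rayleigh.rvs(loc=2.0, scale=1.5, size=200, random_state=0)
fit = minimize(negloglik, x0=[x.min() - 0.1, x.std()], args=(x,), method="Nelder-Mead")
print("MLE (mu, lambda):", fit.x)
```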

Keywords: expectation-maximization algorithm, maximum likelihood estimation, Newton-Raphson method, two-parameter Rayleigh distribution, progressive type-II censoring

Procedia PDF Downloads 151
2757 Estimation of Service Quality and Its Impact on Market Share Using Business Analytics

Authors: Haritha Saranga

Abstract:

Service quality has become an important driver of competition in manufacturing industries of late, as many products are sold in conjunction with service offerings. With the increase in computational power and data-capture capabilities, it has become possible to analyze and estimate various aspects of service quality at the granular level and determine their impact on business performance. In the current study, dealer-level, model-wise warranty data from one of the top two-wheeler manufacturers in India are used to estimate the service quality of individual dealers and its impact on warranty-related costs and sales performance. We collected primary data on warranty costs, number of complaints, monthly sales, type of quality upgrades, etc., from the two-wheeler automaker. In addition, we gathered secondary data on various regions in India, such as petrol and diesel prices and the geographic and climatic conditions of the regions where the dealers are located, to control for customer usage patterns. We analyze these primary and secondary data with the help of a variety of analytics tools, such as Auto-Regressive Integrated Moving Average (ARIMA), Seasonal ARIMA, and ARIMAX. The results, after controlling for a variety of factors such as size, age, region of the dealership, and customer usage pattern, show that service quality influences sales of the products in a significant manner. A more nuanced analysis reveals the dynamics between product quality and service quality and how their interaction affects sales performance in the Indian two-wheeler industry. We also provide various managerial insights using descriptive analytics and build a model that can provide sales projections using a variety of forecasting techniques.
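
An ARIMAX fit of sales on a warranty-based service-quality regressor can be sketched with statsmodels (illustrative data; the ARIMA order and regressor choice are assumptions, not the study's specification):

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Sketch: monthly dealer sales modeled with ARIMAX, where a warranty-based
# service-quality indicator (complaint count) enters as an exogenous regressor.
rng = np.random.default_rng(0)
idx = pd.date_range("2015-01", periods=48, freq="MS")
complaints = pd.Series(rng.poisson(20, 48), index=idx, name="complaints")
sales = pd.Series(500 - 3.0 * complaints + rng.normal(0, 15, 48), index=idx)

model = SARIMAX(sales, exog=complaints, order=(1, 1, 1)).fit(disp=False)
print(model.params["complaints"])            # estimated service-quality effect
# Forecasting needs future exog values; the last observed ones stand in here.
print(model.forecast(steps=6, exog=complaints[-6:].values.reshape(-1, 1)).round(1))
```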

Keywords: service quality, product quality, automobile industry, business analytics, auto-regressive integrated moving average

Procedia PDF Downloads 111
2756 Modified Lot Quality Assurance Sampling (LQAS) Model for Quality Assessment of Malaria Parasite Microscopy and Rapid Diagnostic Tests in Kano, Nigeria

Authors: F. Sarkinfada, Dabo N. Tukur, Abbas A. Muaz, Adamu A. Yahuza

Abstract:

Appropriate Quality Assurance (QA) of parasite-based diagnosis of malaria to justify Artemisinin-based Combination Therapy (ACT) is essential for malaria programmes. In Low- and Middle-Income Countries (LMICs), resource constraints appear to be a major challenge in implementing the conventional QA system. We designed and implemented a modified Lot Quality Assurance Sampling (LQAS) model for QA of malaria parasite (MP) microscopy and RDT in a State Specialist Hospital (SSH) and a University Health Clinic (UHC) in Kano, Nigeria. The capacities of both facilities for MP microscopy and RDT were assessed before implementing the modified LQAS over a period of 3 months. Quality indicators comprising the quality of blood film preparation and staining, MP positivity rates, concordance rates, error rates (in terms of false positives and false negatives), sensitivity, and specificity were monitored and evaluated. Seventy-one percent (71%) of the basic requirements for malaria microscopy were available in both facilities, with an absence of certified microscopists, SOPs, and quality assurance mechanisms. A daily average of 16 to 32 blood samples was tested, with a blood film staining quality of >70% recorded in both facilities. Using microscopy, the MP positivity rates were 50.46% and 19.44% in SSH and UHC, respectively, while the rates were 45.83% and 22.78% when RDT was used. Higher concordance rates of 88.90% and 93.98% were recorded in SSH and UHC, respectively, using microscopy, while lower rates of 74.07% and 80.58% were recorded when RDT was used. In both facilities, error rates were higher with RDT than with microscopy. Sensitivity and specificity were higher with microscopy (95% and 84% in SSH; 94% in UHC) than with RDT (72% and 76% in SSH; 78% and 81% in UHC). It could be feasible to implement an integrated QA model for MP microscopy and RDT using modified LQAS in malaria control programmes in Low- and Middle-Income Countries that have resource constraints for the parasite-based diagnosis of malaria needed to justify ACT treatment.
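
The quality indicators monitored above reduce to standard 2x2-table computations; a minimal sketch with illustrative counts, not the Kano data:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Quality indicators of the kind tracked in the LQAS monitoring, computed
    from a 2x2 table of index-test results (microscopy or RDT) vs. reference."""
    total = tp + fp + tn + fn
    return {
        "positivity_rate": (tp + fp) / total,
        "concordance": (tp + tn) / total,
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "false_positive_rate": fp / (fp + tn),
        "false_negative_rate": fn / (fn + tp),
    }

# Illustrative counts only.
for name, value in diagnostic_metrics(tp=45, fp=5, tn=40, fn=10).items():
    print(f"{name}: {value:.2%}")
```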

Keywords: malaria, microscopy, quality assurance, RDT

Procedia PDF Downloads 210
2755 Sustainable Refrigerated Transport Engineering

Authors: A. A, F. Belmir, A. El Bouari, Y. Abboud

Abstract:

This article presents a study of the thermal performance of a new solar mobile refrigeration prototype for the preservation of perishable foods. The simulation of the refrigeration cycle and the calculation of the thermal balances made it possible to estimate its consumption and to evaluate the capacity of each photovoltaic component necessary for the production of energy. The study provides a description of the refrigerator construction and operation, including an energy balance analysis of the refrigerator performance under typical loads. The photovoltaic system requirements are also detailed.

Keywords: composite, material, photovoltaic, refrigeration, thermal

Procedia PDF Downloads 227
2754 Modeling and Temperature Control of Water-Cooled PEMFC System Using Intelligent Algorithm

Authors: Chen Jun-Hong, He Pu, Tao Wen-Quan

Abstract:

The proton exchange membrane fuel cell (PEMFC) is among the most promising future energy sources owing to its low operating temperature, high energy efficiency, high power density, and environmental friendliness. In this paper, a comprehensive control-oriented model of a PEMFC system is developed in the Matlab/Simulink environment, including the hydrogen supply subsystem, air supply subsystem, and thermal management subsystem. In addition, an Improved Artificial Bee Colony (IABC) algorithm is used for parameter identification of the PEMFC semi-empirical equations, making the maximum relative error between simulation data and experimental data less than 0.4%. Operating temperature is essential for a PEMFC; both high and low temperatures are disadvantageous. In the thermal management subsystem, the water pump and the fan are both controlled with PID controllers to maintain the appropriate operating temperature of the PEMFC, as required for safe and efficient operation. To further improve the control performance, fuzzy control is introduced to optimize the PID controller of the pump, and a Radial Basis Function (RBF) neural network is introduced to optimize the PID controller of the fan. The results demonstrate that Fuzzy-PID and RBF-PID achieve a better control effect, with a 22.66% decrease in the Integral Absolute Error criterion (IAE) of T_st (the PEMFC temperature) and a 77.56% decrease in the IAE of T_in (the inlet cooling water temperature) compared with traditional PID. Finally, a novel thermal management structure is proposed in which the cooling air passing through the main radiator continues on to cool the secondary radiator. With this structure, the parasitic power dissipation can be reduced by 69.94%, and the control effect can be improved with a 52.88% decrease in the IAE of T_in under the same controller.
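
A minimal discrete PID loop for the coolant fan on a toy first-order thermal plant is sketched below; the fuzzy and RBF-network optimizations described in the paper would adapt the three gains online, indicated here only by a comment (plant coefficients and gains are assumptions):

```python
class PID:
    """Discrete PID controller."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral, self.prev_err = 0.0, 0.0

    def step(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

dt = 0.1
pid, T, T_set = PID(2.0, 0.1, 0.05, dt), 70.0, 65.0   # toy stack temperature (C)
iae = 0.0
for _ in range(600):
    # In the paper, a fuzzy system / RBF network would adapt kp, ki, kd here.
    u = max(0.0, min(1.0, -pid.step(T_set, T)))   # fan cools, so invert the sign
    T += dt * (0.5 * (80.0 - T) - 10.0 * u)       # toy plant: heating vs. fan cooling
    iae += abs(T_set - T) * dt                    # Integral Absolute Error criterion
print(f"final T = {T:.2f} C, IAE = {iae:.2f}")
```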

Keywords: PEMFC system, parameter identification, temperature control, Fuzzy-PID, RBF-PID, parasitic power

Procedia PDF Downloads 70
2753 The Analysis of Gizmos Online Program as Mathematics Diagnostic Program: A Story from an Indonesian Private School

Authors: Shofiayuningtyas Luftiani

Abstract:

Some private schools in Indonesia have started integrating the online program Gizmos into the teaching-learning process. Gizmos was developed to supplement the existing curriculum by being integrated into instructional programs. The program offers inquiry-based simulations in which students explore using a worksheet while teachers use the teacher guidelines to direct and assess students' performance. In this study, the discussion of Gizmos highlights its features as an assessment medium for mathematics learning for secondary school students, based on a case study and a literature review from the Indonesian context. The purpose of applying Gizmos as an assessment medium refers to diagnostic assessment: as part of the diagnostic assessment, teachers review the student exploration sheets, analyze the students' difficulties in particular, and consider the findings in planning the future learning process. This assessment is important because teachers need data about students' persistent weaknesses. Additionally, the program helps to build students' understanding through its interactive simulations. Currently, the assessment over-emphasizes the students' answers in the worksheet against the provided answer keys, even though students also perform skills in translating the question, running the simulation, and answering the question. The assessment should instead involve multiple perspectives and sources of students' performance, since teachers should adjust instructional programs to the complexity of students' learning needs and styles. Consequently, an approach to improving the assessment components is selected to challenge the current assessment; the purpose of this challenge is to involve not only cognitive diagnosis but also the analysis of skills and errors. For the selected setting of this diagnostic assessment, which combines cognitive diagnosis, skills analysis and error analysis, teachers should create an assessment rubric. The rubric plays an important role as a guide, providing a set of criteria for the assessment; without a precise rubric, the teacher may ineffectively document and follow up the data about students at risk of failure. Furthermore, teachers who employ Gizmos for diagnostic assessment may encounter obstacles: in the selected setting, these involve time constraints, reluctance to accept a higher teaching burden, and students' behavior. Consequently, a teacher who chooses Gizmos with these approaches has to plan, implement and evaluate the assessment. The main point of this assessment is not the result of the students' worksheets; rather, the diagnostic assessment is a two-stage process that prompts and effectively follows up both individual weaknesses and those of the learning process. Ultimately, the discussion of Gizmos as a medium of diagnostic assessment reflects the effort to improve the mathematical learning process.

Keywords: diagnostic assessment, error analysis, Gizmos online program, skills analysis

Procedia PDF Downloads 171
2752 Long-Term Otitis Media with Effusion and Related Hearing Loss and Its Impact on Developmental Outcomes

Authors: Aleema Rahman

Abstract:

Introduction: This study aims to estimate the prevalence of long-term otitis media with effusion (OME) and hearing loss in a prospective longitudinal cohort study and to examine the relationship between the condition and educational and psychosocial outcomes. Methods: Analysis of data from the Avon Longitudinal Study of Parents and Children (ALSPAC) will be undertaken. ALSPAC is a longitudinal birth cohort study carried out in the UK, which has collected detailed measures of hearing on ~7000 children from the age of seven. A descriptive analysis of the data will be undertaken to estimate the prevalence of OME and hearing loss (defined as average hearing levels > 20 dB and a type B tympanogram) at 7, 9, 11, and 15 years, as well as that of long-term OME and hearing loss. Logistic and linear regression analyses will be conducted to examine associations between long-term OME and hearing loss and educational outcomes (grades obtained from standardised national attainment tests) and psychosocial outcomes such as anxiety, social fears, and depression at ages 10-11 and 15-16 years. Results: Results will be presented in terms of the prevalence of OME and hearing loss in the population at each age. The prevalence of long-term OME and hearing loss, defined as having OME and hearing loss at two or more time points, will also be reported, along with any associations between long-term OME and hearing loss and the educational and psychosocial outcomes. Analyses will take into account demographic factors such as sex and social deprivation and relevant confounders, including socioeconomic status, ethnicity, and IQ. Discussion: Findings from this study will provide new epidemiological information on the prevalence of long-term OME and hearing loss. The research will provide new knowledge on the impact of OME for the small group of children who do not grow out of the condition by age 7 but continue to have hearing loss and need clinical care through later childhood. The study could have clinical implications and may influence service delivery for this group of children.

Keywords: educational attainment, hearing loss, otitis media with effusion, psychosocial development

Procedia PDF Downloads 124
2751 Predicting the Impact of Scope Changes on Project Cost and Schedule Using Machine Learning Techniques

Authors: Soheila Sadeghi

Abstract:

In the dynamic landscape of project management, scope changes are an inevitable reality that can significantly impact project performance. These changes, whether initiated by stakeholders, external factors, or internal project dynamics, can lead to cost overruns and schedule delays. Accurately predicting the consequences of these changes is crucial for effective project control and informed decision-making. This study aims to develop predictive models to estimate the impact of scope changes on project cost and schedule using machine learning techniques. The research utilizes a comprehensive dataset containing detailed information on project tasks, including the Work Breakdown Structure (WBS), task type, productivity rate, estimated cost, actual cost, duration, task dependencies, scope change magnitude, and scope change timing. Multiple machine learning models are developed and evaluated to predict the impact of scope changes on project cost and schedule. These models include Linear Regression, Decision Tree, Ridge Regression, Random Forest, Gradient Boosting, and XGBoost. The dataset is split into training and testing sets, and the models are trained using the preprocessed data. Cross-validation techniques are employed to assess the robustness and generalization ability of the models. The performance of the models is evaluated using metrics such as Mean Squared Error (MSE) and R-squared. Residual plots are generated to assess the goodness of fit and identify any patterns or outliers. Hyperparameter tuning is performed to optimize the XGBoost model and improve its predictive accuracy. The feature importance analysis reveals the relative significance of different project attributes in predicting the impact on cost and schedule. Key factors such as productivity rate, scope change magnitude, task dependencies, estimated cost, actual cost, duration, and specific WBS elements are identified as influential predictors. The study highlights the importance of considering both cost and schedule implications when managing scope changes. The developed predictive models provide project managers with a data-driven tool to proactively assess the potential impact of scope changes on project cost and schedule. By leveraging these insights, project managers can make informed decisions, optimize resource allocation, and develop effective mitigation strategies. The findings of this research contribute to improved project planning, risk management, and overall project success.
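
A hedged sketch of this modeling pipeline, using scikit-learn's gradient boosting in place of XGBoost and synthetic task records with the described column names:

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.metrics import mean_squared_error, r2_score

# Synthetic task records; column names follow the dataset described above.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "productivity_rate": rng.uniform(0.5, 1.5, 400),
    "estimated_cost": rng.uniform(1e3, 5e4, 400),
    "duration": rng.uniform(1, 60, 400),
    "scope_change_magnitude": rng.uniform(0, 0.5, 400),
    "scope_change_timing": rng.uniform(0, 1, 400),   # fraction of schedule elapsed
})
df["cost_impact"] = (df.estimated_cost * df.scope_change_magnitude
                     * (1 + df.scope_change_timing) / df.productivity_rate
                     + rng.normal(0, 500, 400))

X, y = df.drop(columns="cost_impact"), df["cost_impact"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)

pred = model.predict(X_te)
print("MSE:", mean_squared_error(y_te, pred), "R2:", r2_score(y_te, pred))
print("CV R2:", cross_val_score(model, X, y, cv=5, scoring="r2").mean())
print(dict(zip(X.columns, model.feature_importances_.round(3))))   # feature importance
```

An analogous model with a schedule-delay target would cover the second predictive task described above.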

Keywords: cost impact, machine learning, predictive modeling, schedule impact, scope changes

Procedia PDF Downloads 20
2750 Design and Simulation of an Inter-Satellite Optical Wireless Communication System Using Diversity Techniques

Authors: Sridhar Rapuru, D. Mallikarjunreddy, Rajanarendra Sai

Abstract:

In this era of the internet, users need access to any multimedia file at any time with superior quality. To achieve this goal, it is very important to have a good network, without any interruptions, between the satellites and the various earth stations. For that purpose, a high-speed inter-satellite optical wireless communication (IsOWC) system is designed with space and polarization diversity techniques. IsOWC offers high bandwidth, small size, and low power requirements, and is more affordable than present microwave satellite systems. To improve efficiency and to reduce propagation delay, an inter-satellite link is established between the satellites. Because each satellite follows its own orbit, highly accurate tracking systems are required to establish a reliable connection between them. The main disadvantage of the IsOWC system is that the laser beam width is much narrower than that of RF, which is why such a highly accurate tracking system is needed. The satellite uses 'ephemerides data' for rough pointing and a tracking system for fine pointing to the other satellite. In the proposed IsOWC system, laser light serves as the wireless link between the source and destination, and free space acts as the channel that carries the message. The proposed system is designed, simulated, and analyzed for a 6,000 km link, with an improvement in data rate over previously existing systems. The performance parameters of the system are the Q-factor, eye opening, bit error rate, etc. The proposed inter-satellite optical wireless communication system design using diversity techniques finds wide scope of application in future-generation communication.
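
For context on the performance parameters mentioned above, the Python sketch below shows the standard relation between receiver Q-factor and bit error rate for on-off-keyed detection with Gaussian noise, BER = 0.5·erfc(Q/√2). The numerical eye statistics are illustrative values, not results from the reported simulation.

import numpy as np
from scipy.special import erfc

def q_factor(mu1, mu0, sigma1, sigma0):
    """Receiver Q-factor from eye-diagram statistics: the means and
    standard deviations of the received '1' and '0' levels."""
    return (mu1 - mu0) / (sigma1 + sigma0)

def ber_from_q(q):
    """BER for OOK detection with Gaussian noise: 0.5*erfc(Q/sqrt(2))."""
    return 0.5 * erfc(q / np.sqrt(2))

# Illustrative eye statistics only (not the paper's simulation output):
q = q_factor(mu1=1.0, mu0=0.1, sigma1=0.08, sigma0=0.05)
print(f"Q = {q:.2f}, BER = {ber_from_q(q):.2e}")
# A Q-factor of ~6 corresponds to a BER of ~1e-9, a common acceptance
# threshold for optical links.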

Keywords: inter-satellite optical wireless system, space and polarization diversity techniques, line of sight, bit error rate, Q-factor

Procedia PDF Downloads 251
2749 Improving the Accuracy of Stress Intensity Factors Obtained by Scaled Boundary Finite Element Method on Hybrid Quadtree Meshes

Authors: Adrian W. Egger, Savvas P. Triantafyllou, Eleni N. Chatzi

Abstract:

The scaled boundary finite element method (SBFEM) is a semi-analytical numerical method, which introduces a scaling center in each element’s domain, thus transitioning from a Cartesian reference frame to one resembling polar coordinates. Consequently, an analytical solution is achieved in the radial direction, implying that only the boundary need be discretized. The only limitation imposed on the resulting polygonal elements is that they remain star-convex. Further arbitrary p- or h-refinement may be applied locally in a mesh. The polygonal nature of SBFEM elements has been exploited in quadtree meshes to alleviate all issues conventionally associated with hanging nodes. Furthermore, since in 2D this results in only 16 possible cell configurations, these are precomputed in order to accelerate the forward analysis significantly. Any cells, which are clipped to accommodate the domain geometry, must be computed conventionally. However, since SBFEM permits polygonal elements, significantly coarser meshes at comparable accuracy levels are obtained when compared with conventional quadtree analysis, further increasing the computational efficiency of this scheme. The generalized stress intensity factors (gSIFs) are computed by exploiting the semi-analytical solution in the radial direction. This is initiated by placing the scaling center of the element containing the crack at the crack tip. Taking an analytical limit of this element’s stress field as it approaches the crack tip delivers an expression for the singular stress field. By applying the problem-specific boundary conditions, the geometry correction factor is obtained, and the gSIFs are then evaluated based on their formal definition. Since the SBFEM solution is constructed as a power series, not unlike mode superposition in FEM, the two modes contributing to the singular response of the element can be easily identified in post-processing. Compared to the extended finite element method (XFEM), this approach is highly convenient, since neither enrichment terms nor a priori knowledge of the singularity is required. Computation of the gSIFs by SBFEM permits exceptional accuracy; however, when combined with hybrid quadtrees employing linear elements, this does not always hold. Nevertheless, it has been shown that crack propagation schemes are highly effective even with very coarse discretizations, since they rely only on the ratio of mode one to mode two gSIFs; the absolute values of the gSIFs may still be subject to large errors. Hence, we propose a post-processing scheme, which minimizes the error resulting from the approximation space of the cracked element, thus limiting the error in the gSIFs to the discretization error of the quadtree mesh. This is achieved by h- and/or p-refinement of the cracked element, which increases the number of modes present in the solution. The resulting numerical description of the element is highly accurate, with the main error source now stemming from its boundary displacement solution. Numerical examples show that this post-processing procedure can significantly improve the accuracy of the computed gSIFs with negligible computational cost, even on coarse meshes resulting from hybrid quadtrees.
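
To illustrate the remark that crack propagation schemes depend only on the ratio of mode one to mode two gSIFs, the Python sketch below evaluates the kink angle under the maximum tangential stress criterion of Erdogan and Sih, a standard LEFM propagation criterion, not one specific to this paper; the gSIF values used are illustrative.

import math

def kink_angle(K_I, K_II):
    """Crack propagation (kink) angle from the maximum tangential stress
    criterion; note it depends only on the ratio K_I/K_II."""
    if K_II == 0.0:
        return 0.0  # pure mode I: straight-ahead growth
    t = (K_I - math.sqrt(K_I**2 + 8.0 * K_II**2)) / (4.0 * K_II)
    return 2.0 * math.atan(t)

# Scaling both gSIFs by the same factor leaves the angle unchanged,
# so errors in their absolute values cancel as long as the ratio holds.
print(math.degrees(kink_angle(1.0, 0.5)))  # mixed mode, approx. -40.2 deg
print(math.degrees(kink_angle(2.0, 1.0)))  # same ratio -> same angle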

Keywords: linear elastic fracture mechanics, generalized stress intensity factors, scaled boundary finite element method, hybrid quadtrees

Procedia PDF Downloads 130
2748 Wearable Jacket for Game-Based Post-Stroke Arm Rehabilitation

Authors: A. Raj Kumar, A. Okunseinde, P. Raghavan, V. Kapila

Abstract:

Stroke is the leading cause of adult disability worldwide. With recent advances in immediate post-stroke care, there is an increasing number of young stroke survivors under the age of 65 years. While most stroke survivors will regain the ability to walk, they often experience long-term arm and hand motor impairments. Long-term upper limb rehabilitation is needed to restore movement and function and to prevent deterioration from complications such as learned non-use and learned bad-use. We have developed a novel virtual coach, a wearable instrumented rehabilitation jacket, to motivate individuals to participate in long-term skill re-learning that can be personalized to their impairment profile. The jacket can estimate the movements of an individual’s arms using embedded off-the-shelf sensors (e.g., a 9-DOF IMU for inertial measurements and flex sensors for measuring the angular orientation of the fingers) and a Bluetooth Low Energy (BLE) powered microcontroller (e.g., RFduino) to non-intrusively extract data. The 9-DOF IMU sensors contain a 3-axis accelerometer, a 3-axis gyroscope, and a 3-axis magnetometer to compute quaternions, which are transmitted to a computer to compute the Euler angles and estimate the angular orientation of the arms. The data are used in a gaming environment to provide visual and/or haptic feedback for goal-based, augmented-reality training to facilitate re-learning in a cost-effective, evidence-based manner. The full paper will elaborate on the technical aspects of communication, the interactive gaming environment, and the physical aspects of the electronics necessary to achieve our stated goal. Moreover, the paper will suggest methods to utilize the proposed system as a cheaper, more portable, and more versatile alternative to existing instrumentation for post-stroke personalized arm rehabilitation.
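
As a minimal sketch of the quaternion-to-Euler-angle step described above, the following Python function converts a unit quaternion, such as one streamed from a 9-DOF IMU, into roll/pitch/yaw angles (ZYX convention). It illustrates the standard conversion only and is not the jacket's actual firmware or host-side code.

import math

def quaternion_to_euler(w, x, y, z):
    """Convert a unit quaternion to roll/pitch/yaw Euler angles in
    radians (ZYX convention), as used to estimate arm orientation."""
    roll = math.atan2(2.0 * (w * x + y * z), 1.0 - 2.0 * (x * x + y * y))
    # Clamp the asin argument to guard against floating-point drift.
    pitch = math.asin(max(-1.0, min(1.0, 2.0 * (w * y - z * x))))
    yaw = math.atan2(2.0 * (w * z + x * y), 1.0 - 2.0 * (y * y + z * z))
    return roll, pitch, yaw

# Identity quaternion -> zero rotation of the arm segment.
print(quaternion_to_euler(1.0, 0.0, 0.0, 0.0))  # (0.0, 0.0, 0.0)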

Keywords: feedback, gaming, Euler angles, rehabilitation, augmented reality

Procedia PDF Downloads 270