Search results for: type i error
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8471

7721 The Impact of Motor Predispositions of Pilot-Cadets on Results in Aviation Synthetic Efficiency Test

Authors: Zbigniew Wochynski, Justyna Skrzynska, Robert Jedrys, Zdzislaw Kobos

Abstract:

The aim of the study is to determine the types of motor skills and their impact on achieving results in the Aviation Synthetic Efficiency Test (ASET). The study involved 59 cadets, 21 years old on average, who were in their first year of pilot studies. The average weight of the respondents was 73.8 kg. The subjects were divided into two groups by weight: up to 73.8 kg (group A, n=30) and above 73.8 kg (group B, n=29). All subjects underwent the following tests: 40 m, 100 m, 1000 m, and 2000 m runs, pull-ups, and ASET. In both groups, the cadets were divided into two motor-skill types, taking into account the 40 m run, pull-ups, and the 2000 m run, and were then subjected to ASET. A statistically significant increase was shown in group B in body height, weight, and BMI (p < 0.0003, p < 0.0001, and p < 0.0001, respectively) compared to group A. The results indicate that the dominant motor type in all subjects is the endurance-strength model, which reached a speed of V = 1.42 m/s in overcoming ASET. This is confirmed by the correlation between the 2000 m run and pull-ups, r = 0.37 (p < 0.05). In group A, the results indicate that the dominant motor type is a speed-endurance model (26.6%), which reached a speed of V = 1.42 m/s in overcoming ASET. In group B, the dominant motor type was speed-strength (20.6%), which reached a speed of V = 1.45 m/s in overcoming ASET. This is confirmed by the correlation between ASET and pull-ups, r = 0.56 (p < 0.005). Cadets with only one dominant characteristic achieved worse results in ASET. Overall, the best ASET results were achieved by the endurance-strength motor type; within group A, by the endurance-speed type, and within group B, by the speed-strength type.

Keywords: ASET, Aviation Synthetic Efficiency Test, motor skills, physical tests, pilot-cadets

Procedia PDF Downloads 288
7720 Developing an ANN Model to Predict Anthropometric Dimensions Based on Real Anthropometric Database

Authors: Waleed A. Basuliman, Khalid S. AlSaleh, Mohamed Z. Ramadan

Abstract:

Applying anthropometric dimensions is considered one of the important factors when designing any human-machine system. In this study, the estimation of anthropometric dimensions has been improved by developing an artificial neural network that aims to predict the anthropometric measurements of males in Saudi Arabia. A total of 1427 Saudi males from age 6 to 60 participated in measuring twenty anthropometric dimensions. These anthropometric measurements are important for designing the majority of work and life applications in Saudi Arabia. The data were collected over 8 months from different locations in Riyadh City. Five of these dimensions were used as predictor variables (inputs) of the model, and the remaining fifteen dimensions were set to be the measured variables (outcomes). The hidden layers were varied during the structuring stage, and the best performance was achieved with the network structure 6-25-15. The results showed that the developed neural network model was able to predict the body dimensions for the population of Saudi Arabia. The network mean absolute percentage error (MAPE) and the root mean squared error (RMSE) were found to be 0.0348 and 3.225, respectively. The accuracy of the developed neural network was evaluated by comparing the predicted outcomes with a multiple regression model. The ANN model performed better and yielded excellent correlation coefficients between the predicted and actual dimensions.
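
The kind of multi-output feed-forward network described above can be sketched as follows. This is not the authors' implementation; the file names, hyperparameters, and use of scikit-learn are assumptions, and only the single hidden layer of 25 neurons mirrors the reported 6-25-15 structure.

```python
# Minimal sketch (not the authors' implementation): a feed-forward network with one
# hidden layer of 25 neurons mapping a few measured dimensions to the remaining ones.
# X.csv / y.csv are hypothetical placeholders for the anthropometric database.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

X = np.loadtxt("X.csv", delimiter=",")   # predictor dimensions (inputs)
y = np.loadtxt("y.csv", delimiter=",")   # remaining dimensions (outputs)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

model = MLPRegressor(hidden_layer_sizes=(25,), activation="tanh",
                     solver="adam", max_iter=5000, random_state=0)
model.fit(X_tr, y_tr)

pred = model.predict(X_te)
rmse = np.sqrt(mean_squared_error(y_te, pred))
mape = np.mean(np.abs((y_te - pred) / y_te))  # mean absolute percentage error
print(f"RMSE = {rmse:.3f}, MAPE = {mape:.4f}")
```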

Keywords: artificial neural network, anthropometric measurements, backpropagation, real anthropometric database

Procedia PDF Downloads 576
7719 DCT and Stream Ciphers for Improved Image Encryption Mechanism

Authors: T. R. Sharika, Ashwini Kumar, Kamal Bijlani

Abstract:

Encryption is the process of converting crucial information into a form that is unreadable to unauthorized persons. Image encryption is an important branch of security that protects all types of images from cryptanalysis. A stream cipher is a fast symmetric-key algorithm that is used to convert plaintext to ciphertext. In this paper, we propose an image encryption algorithm based on the Discrete Cosine Transform (DCT) and stream ciphers that can improve the compression of images and enhance security. The paper also explains the use of a shuffling algorithm for further enhancing security.
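
A hedged illustration of combining a DCT stage with a stream cipher is given below. It is not the authors' exact scheme: the block applies a whole-image DCT, quantizes the coefficients, and XORs the byte stream with an RC4 keystream; function names, key, and quantization are all assumptions.

```python
# Illustrative sketch only (not the authors' exact scheme): 2-D DCT of an 8-bit
# grayscale image, coefficient quantization, then XOR with an RC4 keystream.
import numpy as np
from scipy.fft import dctn, idctn

def rc4_keystream(key: bytes, n: int) -> bytes:
    S = list(range(256))
    j = 0
    for i in range(256):                     # key-scheduling algorithm (KSA)
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out, i, j = bytearray(), 0, 0
    for _ in range(n):                       # pseudo-random generation (PRGA)
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return bytes(out)

def encrypt_image(img: np.ndarray, key: bytes) -> np.ndarray:
    coeffs = dctn(img.astype(float), norm="ortho")       # transform stage
    quantized = np.round(coeffs).astype(np.int32)
    raw = quantized.tobytes()
    ks = rc4_keystream(key, len(raw))
    cipher = bytes(a ^ b for a, b in zip(raw, ks))        # stream-cipher stage
    return np.frombuffer(cipher, dtype=np.int32).reshape(img.shape)

# Decryption reverses the XOR and applies the inverse DCT (idctn) to the
# recovered coefficients; the rounding step introduces slight loss.
```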

Keywords: decryption, DCT, encryption, RC4 cipher, stream cipher

Procedia PDF Downloads 361
7718 Enhancing Signal Reception in a Mobile Radio Network Using Adaptive Beamforming Antenna Arrays Technology

Authors: Ugwu O. C., Mamah R. O., Awudu W. S.

Abstract:

This work is aimed at enhancing signal reception in a mobile radio network and minimizing outage probability using adaptive beamforming antenna arrays. In this research work, an empirical real-time drive measurement was done in a cellular network of Globalcom Nigeria Limited located at Ikeja, the headquarters of Lagos State, Nigeria, with reference base station number KJA 004. The empirical measurement included Received Signal Strength and Bit Error Rate, which were recorded for exact prediction of the signal strength of the network at the time of carrying out this research work. The Received Signal Strength and Bit Error Rate were measured with a spectrum monitoring van, with the help of a ray tracer, at intervals of 100 meters up to 700 meters from the transmitting base station. The distance and angular location measurements from the reference network were done with the help of a Global Positioning System (GPS). The other equipment used were transmitting equipment measurement software (TEMS software), laptops, and log files, which showed received signal strength with distance from the base station. The real-time experiment showed an outage of about 11%, which indicates that mobile radio networks are prone to signal failure; this can be minimized using an adaptive beamforming antenna array in terms of a significant reduction in Bit Error Rate, which implies improved performance of the mobile radio network. In addition, this work not only included empirical measurements but also developed and implemented enhanced mathematical models as a reference for accurate prediction. The proposed signal models were based on the analysis of continuous time and discrete space, and some other assumptions. These enhanced models were validated using a MATLAB (version 7.6.3.35) program and compared with the conventional antenna for accuracy. The outage models were used to manage the blocked-call experience in the mobile radio network. A 20% improvement was obtained when the adaptive beamforming antenna arrays were implemented on the wireless mobile radio network.
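
For readers unfamiliar with adaptive beamforming, a minimal least-mean-squares (LMS) beamformer for a uniform linear array is sketched below. It does not reproduce the authors' MATLAB models; the array geometry, signal angles, noise level, and step size mu are assumed values.

```python
# Minimal LMS adaptive beamformer sketch for a uniform linear array (illustrative only).
import numpy as np

M, N = 8, 2000                    # antenna elements, snapshots
d = 0.5                           # element spacing in wavelengths
theta_s, theta_i = 20.0, -40.0    # desired and interferer angles (degrees)

def steering(theta_deg):
    k = np.arange(M)
    return np.exp(-2j * np.pi * d * k * np.sin(np.deg2rad(theta_deg)))

s = np.sign(np.random.randn(N)) + 0j                 # desired BPSK-like reference signal
i = np.random.randn(N) + 1j * np.random.randn(N)     # interference
n = 0.1 * (np.random.randn(M, N) + 1j * np.random.randn(M, N))
X = np.outer(steering(theta_s), s) + np.outer(steering(theta_i), i) + n

w = np.zeros(M, dtype=complex)    # adaptive weights
mu = 1e-3                         # LMS step size
for t in range(N):
    x = X[:, t]
    y = np.vdot(w, x)             # array output w^H x
    e = s[t] - y                  # error against the reference signal
    w += mu * x * np.conj(e)      # LMS weight update

print("final instantaneous error magnitude:", abs(e))
```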

Keywords: beamforming algorithm, adaptive beamforming, simulink, reception

Procedia PDF Downloads 41
7717 Bayesian Borrowing Methods for Count Data: Analysis of Incontinence Episodes in Patients with Overactive Bladder

Authors: Akalu Banbeta, Emmanuel Lesaffre, Reynaldo Martina, Joost Van Rosmalen

Abstract:

Including data from previous studies (historical data) in the analysis of a current study may reduce the sample size requirement and/or increase the power of the analysis. The most common example is incorporating historical control data in the analysis of a current clinical trial. However, this only applies when the historical control data are similar enough to the current control data. Recently, several Bayesian approaches for incorporating historical data have been proposed, such as the meta-analytic-predictive (MAP) prior and the modified power prior (MPP), both for a single control arm as well as for multiple historical control arms. Here, we examine the performance of the MAP and the MPP approaches for the analysis of (over-dispersed) count data. To this end, we propose a computational method for the MPP approach for the Poisson and the negative binomial models. We conducted an extensive simulation study to assess the performance of the Bayesian approaches. Additionally, we illustrate our approaches on an overactive bladder data set. For similar data across the control arms, the MPP approach outperformed the MAP approach with respect to statistical power. When the means across the control arms are different, the MPP yielded a slightly inflated type I error (TIE) rate, whereas the MAP did not. In contrast, when the dispersion parameters are different, the MAP gave an inflated TIE rate, whereas the MPP did not. We conclude that the MPP approach is more promising than the MAP approach for incorporating historical count data.
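
To illustrate the borrowing idea on count data, a simplified conjugate power-prior calculation is sketched below. It uses a fixed discounting weight delta, whereas the modified power prior treats delta as random and requires a normalizing constant; all data values are hypothetical.

```python
# Simplified Gamma-Poisson power prior with a fixed discounting weight delta
# (not the full modified power prior of the paper). Data values are hypothetical.
import numpy as np
from scipy import stats

y_hist = np.array([4, 6, 5, 7, 3])     # historical control counts
y_curr = np.array([5, 4, 6, 5])        # current control counts
a0, b0 = 0.1, 0.1                      # vague Gamma(a0, b0) initial prior
delta = 0.5                            # discounting weight in [0, 1]

# Gamma-Poisson conjugacy: raising the historical likelihood to the power delta
# simply scales its sufficient statistics.
a_post = a0 + delta * y_hist.sum() + y_curr.sum()
b_post = b0 + delta * len(y_hist) + len(y_curr)

posterior = stats.gamma(a_post, scale=1.0 / b_post)
print("posterior mean event rate:", posterior.mean())
print("95% credible interval:", posterior.interval(0.95))
```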

Keywords: count data, meta-analytic prior, negative binomial, Poisson

Procedia PDF Downloads 117
7716 Type–2 Fuzzy Programming for Optimizing the Heat Rate of an Industrial Gas Turbine via Absorption Chiller Technology

Authors: T. Ganesan, M. S. Aris, I. Elamvazuthi, Momen Kamal Tageldeen

Abstract:

Terms set in power purchase agreements (PPA) challenge power utility companies in balancing between the returns (from maximizing power production) and securing long term supply contracts at capped production. The production limitation set in the PPA has driven efforts to maximize profits through efficient and economic power production. In this paper, a combined industrial-scale gas turbine (GT) - absorption chiller (AC) system is considered to cool the GT air intake for reducing the plant’s heat rate (HR). This GT-AC system is optimized while considering power output limitations imposed by the PPA. In addition, the proposed formulation accounts for uncertainties in the ambient temperature using Type-2 fuzzy programming. Using the enhanced chaotic differential evolution (CEDE), the Pareto frontier was constructed and the optimization results are analyzed in detail.
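
As a rough illustration of the "chaotic" ingredient in the optimizer, a compact single-objective differential evolution loop with a logistic-map-driven scale factor is sketched below. The paper's CEDE is multiobjective and problem-specific, so this is not its implementation; the test function and all parameters are placeholders.

```python
# Compact single-objective sketch of differential evolution with a chaotic
# (logistic-map) scale factor F; illustrative only, not the paper's CEDE.
import numpy as np

def chaotic_de(f, bounds, pop_size=30, gens=200, cr=0.9, seed=0):
    rng = np.random.default_rng(seed)
    dim = len(bounds)
    lo, hi = np.array(bounds, dtype=float).T
    pop = lo + rng.random((pop_size, dim)) * (hi - lo)
    fit = np.array([f(x) for x in pop])
    z = 0.7                                   # logistic-map state
    for _ in range(gens):
        for i in range(pop_size):
            z = 4.0 * z * (1.0 - z)           # logistic map drives F in (0, 1)
            F = z
            idx = rng.choice([k for k in range(pop_size) if k != i], 3, replace=False)
            a, b, c = pop[idx]
            mutant = np.clip(a + F * (b - c), lo, hi)
            mask = rng.random(dim) < cr       # binomial crossover
            trial = np.where(mask, mutant, pop[i])
            ft = f(trial)
            if ft < fit[i]:                   # greedy selection
                pop[i], fit[i] = trial, ft
    return pop[np.argmin(fit)], fit.min()

best_x, best_f = chaotic_de(lambda x: np.sum(x**2), bounds=[(-5, 5)] * 4)
print(best_x, best_f)
```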

Keywords: absorption chillers (AC), turbine inlet air cooling (TIC), power purchase agreement (PPA), multiobjective optimization, type-2 fuzzy programming, chaotic differential evolution (CEDE)

Procedia PDF Downloads 310
7715 Heinz-Type Inequalities in Hilbert Spaces

Authors: Jin Liang, Guanghua Shi

Abstract:

In this paper, we are concerned with the further refinements of the Heinz operator inequalities in Hilbert spaces. Our purpose is to derive several new Heinz-type operator inequalities. First, with the help of the Taylor series of some hyperbolic functions, we obtain some refinements of the ordering relations among Heinz means defined by Bhatia with different parameters, which would be more suitable in obtaining the corresponding operator inequalities. Second, we present some generalizations of Heinz operator inequalities. Finally, we give a matrix version of the Heinz inequality for the Hilbert-Schmidt norm.
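
For orientation, the scalar Heinz means referred to above (in Bhatia's sense) and the classical ordering they interpolate can be written as follows; this records only the well-known basic inequality, not the new refinements derived in the paper.

```latex
% Heinz mean of positive scalars; operator versions replace a, b by positive
% operators A, B and the products by the corresponding symmetric operator expressions.
H_{\nu}(a,b) = \frac{a^{\nu} b^{1-\nu} + a^{1-\nu} b^{\nu}}{2}, \qquad 0 \le \nu \le 1,
\qquad
\sqrt{ab} \;\le\; H_{\nu}(a,b) \;\le\; \frac{a+b}{2}.
```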

Keywords: Hilbert space, means inequality, norm inequality, positive linear operator

Procedia PDF Downloads 270
7714 The Omicron Variant BA.2.86.1 of SARS-CoV-2 Demonstrates an Altered Interaction Network and Dynamic Features to Enhance the Interaction with the hACE2

Authors: Taimur Khan, Zakirullah, Muhammad Shahab

Abstract:

The SARS-CoV-2 variant BA.2.86 (Omicron) has emerged with unique mutations that may increase its transmission and infectivity. This study investigates how these mutations alter the interaction network and dynamic properties of the Omicron receptor-binding domain (RBD) compared to the wild-type virus, focusing on its binding affinity to the human ACE2 (hACE2) receptor. Protein-protein docking and all-atom molecular dynamics simulations were used to analyze structural and dynamic differences. Despite the structural similarity to the wild-type virus, the Omicron variant exhibits a distinct interaction network involving new residues that enhance its binding capacity. The dynamic analysis reveals increased flexibility in the RBD, particularly in loop regions crucial for hACE2 interaction. Mutations significantly alter the secondary structure, leading to greater flexibility and conformational adaptability compared to the wild type. Binding free energy calculations confirm that the Omicron RBD has a higher binding affinity (-70.47 kcal/mol) to hACE2 than the wild-type RBD (-61.38 kcal/mol). These results suggest that the altered interaction network and enhanced dynamics of the Omicron variant contribute to its increased infectivity, providing insights for the development of targeted therapeutics and vaccines.

Keywords: SARS-CoV-2, molecular dynamic simulation, receptor binding domain, vaccine

Procedia PDF Downloads 21
7713 A Weighted Sum Particle Swarm Approach (WPSO) Combined with a Novel Feasibility-Based Ranking Strategy for Constrained Multi-Objective Optimization of Compact Heat Exchangers

Authors: Milad Yousefi, Moslem Yousefi, Ricarpo Poley, Amer Nordin Darus

Abstract:

Design optimization of heat exchangers is a very complicated task that has traditionally been carried out through a trial-and-error procedure. To overcome the difficulties of the conventional design approaches, especially when a large number of variables, constraints, and objectives are involved, a new method based on a well-established evolutionary algorithm, particle swarm optimization (PSO), a weighted sum approach, and a novel constraint-handling strategy is presented in this study. Since the conventional constraint-handling strategies are neither effective nor easy to implement in multi-objective algorithms, a novel feasibility-based ranking strategy is introduced, which is both extremely user-friendly and effective. A case study from industry has been investigated to illustrate the performance of the presented approach. The results show that the proposed algorithm can find the near-Pareto-optimal front with higher accuracy when compared to the conventional non-dominated sorting genetic algorithm II (NSGA-II). Moreover, the difficulties of a trial-and-error process for setting the penalty parameters are resolved in this algorithm.
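
A minimal sketch of a weighted-sum PSO with a feasibility-based comparison rule is given below: feasible solutions beat infeasible ones, infeasible solutions are ranked by total constraint violation, and feasible ones by the weighted-sum objective. The heat-exchanger objectives, constraints, and PSO coefficients are placeholders, not the authors' formulation. Sweeping the weight vector over many runs would trace out an approximation of the Pareto front.

```python
# Weighted-sum PSO with a feasibility-based ranking rule (illustrative sketch).
import numpy as np

def violation(g):                       # sum of positive parts of g_i(x) <= 0 constraints
    return np.sum(np.maximum(g, 0.0))

def better(f_a, v_a, f_b, v_b):
    if v_a == 0 and v_b == 0:
        return f_a < f_b                # both feasible: compare weighted objective
    if v_a == 0 or v_b == 0:
        return v_a == 0                 # feasible always beats infeasible
    return v_a < v_b                    # both infeasible: smaller violation wins

def wpso(objectives, constraints, weights, bounds, n=40, iters=300, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    dim = len(bounds)
    x = lo + rng.random((n, dim)) * (hi - lo)
    v = np.zeros_like(x)
    def score(p):
        return (sum(w * f(p) for w, f in zip(weights, objectives)),
                violation(constraints(p)))
    pbest, pscore = x.copy(), [score(p) for p in x]
    g_idx = 0
    for i in range(1, n):
        if better(*pscore[i], *pscore[g_idx]):
            g_idx = i
    for _ in range(iters):
        r1, r2 = rng.random((2, n, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (pbest[g_idx] - x)
        x = np.clip(x + v, lo, hi)
        for i in range(n):
            s = score(x[i])
            if better(*s, *pscore[i]):
                pbest[i], pscore[i] = x[i].copy(), s
                if better(*s, *pscore[g_idx]):
                    g_idx = i
    return pbest[g_idx], pscore[g_idx]

# hypothetical toy usage: two objectives, one constraint x0 + x1 - 3 <= 0
objs = [lambda p: p[0]**2 + p[1]**2, lambda p: (p[0] - 2)**2 + (p[1] - 2)**2]
cons = lambda p: np.array([p[0] + p[1] - 3.0])
print(wpso(objs, cons, weights=[0.5, 0.5], bounds=[(0, 4), (0, 4)]))
```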

Keywords: heat exchanger, multi-objective optimization, particle swarm optimization, NSGA-II, constraint handling

Procedia PDF Downloads 555
7712 Cimifugin Inhibited Th2-Type Allergic Contact Dermatitis

Authors: Xiaoyan Jiang, Huizhu Wang, Lili Gui, Dandan Shen, Xiao Wei, Xi Yu, Hailiang Liu, Min Hong

Abstract:

Objective: To apply FITC to establish a Th2-type allergic contact dermatitis model, and to study the effect and mechanism of cimifugin on Th2-type allergic contact dermatitis. Methods: BALB/c mice were sensitized by painting 80 µl of 1.5% FITC onto the shaved abdominal skin on days 1 and 2. On day 6, the animals were challenged on their right ears with 20 µl of 0.6% FITC, while the left ears were painted with solvent alone; mice were administered cimifugin for 7 days. Twenty-four hours later, ear swelling was measured, the infiltration of eosinophils was investigated by hematoxylin and eosin (H&E) staining, and part of the ear tissue was homogenized for detecting interleukin-4 (IL-4) levels by ELISA. In the initial stage of the above model, mice were administered cimifugin for 5 days (day -1 to day 3), and ear tissue was homogenized to detect IL-33 levels by ELISA. Results: Cimifugin at 25 mg/kg and 50 mg/kg inhibited mouse ear swelling; ear histopathology showed that mice given cimifugin had significantly reduced local tissue fluid exudation, congestion, lymphocyte infiltration, and other inflammatory changes compared with the model group. At the same time, the Th2 cytokine IL-4 was significantly reduced in the mouse ear tissue homogenate. Data from the initial stage show that 12.5 mg/kg and 50 mg/kg cimifugin significantly inhibited IL-33 levels. Conclusion: Cimifugin inhibits FITC-induced Th2-type allergic contact dermatitis, and its mechanism may be related to inhibition of IL-33.

Keywords: cimifugin, allergic contact dermatitis, Th1/Th2, IL-33

Procedia PDF Downloads 479
7711 Analytical Performance of Cobas C 8000 Analyzer Based on Sigma Metrics

Authors: Sairi Satari

Abstract:

Introduction: Six sigma is a metric that quantifies the performance of processes as a rate of defects per million opportunities. Sigma methodology can be applied in the chemical pathology laboratory for evaluating process performance, with evidence for process improvement in the quality assurance program. In the laboratory, these methods have been used to improve the timeliness of troubleshooting, reduce the cost and frequency of quality control, and minimize pre- and post-analytical errors. Aim: The aim of this study is to evaluate the sigma values of the Cobas 8000 analyzer based on the minimum requirement of the specification. Methodology: Twenty-one analytes were chosen in this study. The analytes were alanine aminotransferase (ALT), albumin, alkaline phosphatase (ALP), amylase, aspartate transaminase (AST), total bilirubin, calcium, chloride, cholesterol, HDL-cholesterol, creatinine, creatine kinase, glucose, lactate dehydrogenase (LDH), magnesium, potassium, protein, sodium, triglyceride, uric acid, and urea. Total allowable error was obtained from the Clinical Laboratory Improvement Amendments (CLIA). Bias was calculated from the end-of-cycle report of the Royal College of Pathologists of Australasia (RCPA) cycle from July to December 2016, and the coefficient of variation (CV) from six months of internal quality control (IQC). Sigma was calculated based on the formula: Sigma = (Total Error - Bias) / CV. The analytical performance was evaluated based on the sigma value: sigma > 6 is world class, sigma > 5 is excellent, sigma > 4 is good, sigma > 3 is satisfactory, and sigma < 3 is poor performance. Results: Based on the calculation, we found that 76% are world class (ALT, albumin, ALP, amylase, AST, total bilirubin, cholesterol, HDL-cholesterol, creatinine, creatine kinase, glucose, LDH, magnesium, potassium, triglyceride, and uric acid), 14% are excellent (calcium, protein, and urea), and 10% (chloride and sodium) require IQC to be performed more frequently per day. Conclusion: Based on this study, we found that IQC should be performed more frequently only for chloride and sodium to ensure accurate and reliable analysis for patient management.
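
The sigma formula quoted above translates directly into code; the numbers in the example call are hypothetical, not values from the study.

```python
# Direct implementation of Sigma = (TEa - |Bias|) / CV, all inputs in percent.
def sigma_metric(total_allowable_error: float, bias: float, cv: float) -> float:
    return (total_allowable_error - abs(bias)) / cv

# hypothetical illustration: TEa = 10%, bias = 1.5%, CV = 1.2%  ->  sigma ~ 7.1
print(round(sigma_metric(10.0, 1.5, 1.2), 1))
```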

Keywords: sigma metrics, analytical performance, total error, bias

Procedia PDF Downloads 171
7710 Non-Differentiable Mond-Weir Type Symmetric Duality under Generalized Invexity

Authors: Jai Prakash Verma, Khushboo Verma

Abstract:

In the present paper, a pair of Mond-Weir type non-differentiable multiobjective second-order programming problems, involving two kernel functions, where each of the objective functions contains a support function, is formulated. We prove weak, strong, and converse duality theorems for the second-order symmetric dual programs under η-pseudoinvexity conditions.
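
For readers unfamiliar with Mond-Weir duality, the classical first-order, differentiable, single-objective Mond-Weir pair is recorded below for orientation only; the symmetric, second-order, non-differentiable multiobjective pair with support functions studied in the paper is considerably more general.

```latex
% Classical (first-order, differentiable) Mond-Weir primal-dual pair, for orientation.
\text{(P)}\quad \min_{x}\; f(x) \quad \text{s.t.}\quad g(x) \le 0,
\qquad
\text{(D)}\quad \max_{u,\,y}\; f(u) \quad \text{s.t.}\quad
\nabla f(u) + \nabla\big( y^{\top} g(u) \big) = 0,\;\; y^{\top} g(u) \ge 0,\;\; y \ge 0.
```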

Keywords: non-differentiable multiobjective programming, second-order symmetric duality, efficiency, support function, eta-pseudoinvexity

Procedia PDF Downloads 249
7709 Visfatin and Apelin Are New Interrelated Adipokines Playing Role in the Pathogenesis of Type 2 Diabetes Mellitus Associated Coronary Artery Disease in Postmenopausal Women

Authors: Hala O. El-Mesallamy, Salwa M. Suwailem, Mae M. Seleem

Abstract:

Visfatin and apelin are two new adipokines that have recently gained special interest in diabetes research. This study was conducted to investigate the interplay between these two adipokines and their correlation with other inflammatory and biochemical parameters in type 2 diabetic (T2D) postmenopausal women with CAD. Visfatin and apelin were measured by enzyme-linked immunosorbent assay (ELISA). Visfatin was found to be significantly higher in the following groups: T2D patients without CAD, and non-obese and obese T2D patients with CAD, when compared to the control group. Apelin was found to be significantly lower in non-obese and obese T2D patients with CAD when compared to the control group. Visfatin and apelin were found to be significantly associated with each other and with other biochemical parameters. The current study provides evidence for the interplay between visfatin and apelin through the inflammatory milieu characteristic of T2D and their possible role in the pathogenesis of the CAD complication of T2D.

Keywords: apelin, coronary artery disease, inflammation, type 2 diabetes, visfatin

Procedia PDF Downloads 252
7708 Spatial Climate Changes in the Province of Macerata, Central Italy, Analyzed by GIS Software

Authors: Matteo Gentilucci, Marco Materazzi, Gilberto Pambianchi

Abstract:

Climate change is an increasingly central issue in the world because it affects many human activities. In this context, regional studies are of great importance because they sometimes differ from the general trend. This research focuses on a small area of central Italy which overlooks the Adriatic Sea, the province of Macerata. The aim is to analyze space-based climate changes, for precipitation and temperature, over the last three climatological standard normals (1961-1990; 1971-2000; 1981-2010) through GIS software. The data collected from 30 weather stations for temperature and 61 rain gauges for precipitation were subject to quality controls: validation and homogenization. These data were fundamental for the spatialization of the variables (temperature and precipitation) through geostatistical techniques. To select the best geostatistical technique for interpolation, the results of cross-validation were used. The co-kriging method with altitude as the independent variable produced the best cross-validation results for all time periods among the methods analysed, with 'root mean square error standardized' close to 1, 'mean standardized error' close to 0, and 'average standard error' and 'root mean square error' with similar values. The maps resulting from the analysis were compared by subtraction between rasters, producing three maps of annual variation and three further maps for each month of the year (1961/1990-1971/2000; 1971/2000-1981/2010; 1961/1990-1981/2010). The results show an increase in average annual temperature of about 0.1°C between 1961-1990 and 1971-2000 and of 0.6°C between 1961-1990 and 1981-2010. Annual precipitation shows an opposite trend, with an average difference from 1961-1990 to 1971-2000 of about 35 mm and from 1961-1990 to 1981-2010 of about 60 mm. Furthermore, the differences between the areas have been highlighted with area graphs and summarized in several tables as descriptive analysis. For temperature between 1961-1990 and 1971-2000, the most areally represented frequency is 0.08°C (77.04 km² out of a total of about 2800 km²), with a kurtosis of 3.95 and a skewness of 2.19. The differences for temperature from 1961-1990 to 1981-2010 show a most areally represented frequency of 0.83°C (36.9 km²), with a kurtosis of -0.45 and a skewness of 0.92. Therefore, it can be said that the distribution is more peaked for 1961/1990-1971/2000 and smoother, but more intense in its growth, for 1961/1990-1981/2010. In contrast, precipitation shows a very similar shape of distribution, although with different intensities, for both variation periods (the first period 1961/1990-1971/2000 and the second 1961/1990-1981/2010), with similar values of kurtosis (1st = 1.93; 2nd = 1.34), skewness (1st = 1.81; 2nd = 1.62), and area of the most represented frequency (1st = 60.72 km²; 2nd = 52.80 km²). In conclusion, this methodology of analysis allows the assessment of small-scale climate change for each month of the year and could be further investigated in relation to regional atmospheric dynamics.
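
The raster-differencing step described above can be sketched as follows; the file names, the use of rasterio, and the summary statistics are assumptions rather than the authors' GIS workflow.

```python
# Minimal sketch: subtract two interpolated climate surfaces cell by cell and
# summarize the difference map (mean change, skewness, kurtosis).
import numpy as np
import rasterio
from scipy.stats import skew, kurtosis

with rasterio.open("temp_1961_1990.tif") as a, rasterio.open("temp_1981_2010.tif") as b:
    r1 = a.read(1).astype(float)
    r2 = b.read(1).astype(float)

diff = r2 - r1                               # positive values indicate warming
valid = diff[np.isfinite(diff)]

print("mean change:", valid.mean())
print("skewness:", skew(valid), "kurtosis:", kurtosis(valid))
```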

Keywords: climate change, GIS, interpolation, co-kriging

Procedia PDF Downloads 127
7707 Deep Learning for Renewable Power Forecasting: An Approach Using LSTM Neural Networks

Authors: Fazıl Gökgöz, Fahrettin Filiz

Abstract:

Load forecasting has become crucial in recent years and has become a popular topic in the forecasting area. Many different power forecasting models have been tried out for this purpose. Electricity load forecasting is necessary for energy policies and for healthy and reliable grid systems. Effective power forecasting of renewable energy load leads decision makers to minimize the costs of electric utilities and power plants. Forecasting tools are required that can be used to predict how much renewable energy can be utilized. The purpose of this study is to explore the effectiveness of LSTM-based neural networks for estimating renewable energy loads. In this study, we present models for predicting renewable energy loads based on deep neural networks, especially the Long Short-Term Memory (LSTM) algorithm. Deep learning allows multiple layers of models to learn representations of data. LSTM networks are able to store information for long periods of time. Deep learning models have recently been used to forecast renewable energy sources, such as predicting wind and solar power. Historical load and weather information represent the most important input variables for power forecasting models. The dataset contains power consumption measurements gathered between January 2016 and December 2017 at one-hour resolution. The models use publicly available data from the Turkish Renewable Energy Resources Support Mechanism. Forecasting studies have been carried out with these data via a deep neural network approach, including the LSTM technique, for Turkish electricity markets. 432 different models were created by changing the number of layers, cell count, and dropout. The adaptive moment estimation (ADAM) algorithm is used for training as a gradient-based optimizer instead of SGD (stochastic gradient descent). ADAM performed better than SGD in terms of faster convergence and lower error rates. Model performance is compared according to MAE (mean absolute error) and MSE (mean squared error). The best MAE results among the 432 tested models are 0.66, 0.74, 0.85, and 1.09. The forecasting performance of the proposed LSTM models gives successful results compared to those reported in the literature.
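
A minimal LSTM forecasting sketch of the kind described above is shown below (Keras/TensorFlow). The layer size, window length, dropout rate, and the load.csv file are illustrative assumptions, not one of the 432 tested configurations.

```python
# Minimal LSTM sketch for one-step-ahead load forecasting (illustrative only).
import numpy as np
import tensorflow as tf

series = np.loadtxt("load.csv")                      # hypothetical hourly load values
window = 24                                          # use the last 24 hours as input

X = np.array([series[i:i + window] for i in range(len(series) - window)])
y = series[window:]
X = X[..., np.newaxis]                               # shape: (samples, timesteps, features)

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(64, input_shape=(window, 1)),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
model.fit(X, y, epochs=20, batch_size=32, validation_split=0.2, verbose=0)

print(model.evaluate(X, y, verbose=0))               # [MSE, MAE]
```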

Keywords: deep learning, long short term memory, energy, renewable energy load forecasting

Procedia PDF Downloads 266
7706 Subpixel Corner Detection for Monocular Camera Linear Model Research

Authors: Guorong Sui, Xingwei Jia, Fei Tong, Xiumin Gao

Abstract:

Camera calibration is a fundamental issue in high-precision non-contact measurement, and it is necessary to analyze and study the reliability and application range of the linear model that is often used in camera calibration. According to the imaging features of monocular cameras, a camera model based on image pixel coordinates and three-dimensional space coordinates is built. Using our own customized template, the image pixel coordinates are obtained by a sub-pixel corner detection method. Without considering the aberration of the optical system, feature extraction and linearity analysis of the line segments in the template are performed. Moreover, the experiment is repeated 11 times while constantly varying the measuring distance. Finally, the linearity of the camera is obtained by fitting the 11 groups of data. The camera model measurement results show that the relative error does not exceed 1%, and the repeated measurement error is not more than 0.1 mm in magnitude. Meanwhile, it is found that the model shows some measurement differences in different regions and at different object distances. The experimental results show that this linear model is simple and practical and has good linearity within a certain object distance. These results provide a powerful basis for the establishment of the linear camera model and will have potential value for actual engineering measurement.
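
A generic OpenCV sketch of sub-pixel corner refinement is given below. It is not the authors' custom template code: "template.png", the detector parameters, and the window size are assumptions.

```python
# Detect coarse corners, then refine them to sub-pixel accuracy with cornerSubPix.
import cv2
import numpy as np

gray = cv2.imread("template.png", cv2.IMREAD_GRAYSCALE)

corners = cv2.goodFeaturesToTrack(gray, maxCorners=100, qualityLevel=0.01,
                                  minDistance=10)
corners = np.float32(corners)

criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)
refined = cv2.cornerSubPix(gray, corners, winSize=(5, 5), zeroZone=(-1, -1),
                           criteria=criteria)

# 'refined' holds sub-pixel image coordinates that can be paired with known
# 3-D template coordinates to fit a linear camera model.
print(refined.reshape(-1, 2)[:5])
```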

Keywords: camera linear model, geometric imaging relationship, image pixel coordinates, three dimensional space coordinates, sub-pixel corner detection

Procedia PDF Downloads 277
7705 The Mirage of Progress? A Longitudinal Study of Japanese Students’ L2 Oral Grammar

Authors: Robert Long, Hiroaki Watanabe

Abstract:

This longitudinal study examines the grammatical errors in Japanese university students’ dialogues with a native speaker over an academic year. The L2 interactions of 15 Japanese speakers were taken from the JUSFC2018 corpus (April/May 2018) and the JUSFC2019 corpus (January/February 2019). The corpora were based on a self-introduction monologue and a three-question dialogue; however, this study examines the grammatical accuracy found in the dialogues. Research questions focused on a possible significant difference in grammatical accuracy between the first interview session in 2018 and the second one the following year, specifically regarding errors in clauses per 100 words, global and local errors, and specific errors related to parts of speech. The investigation also examined which forms showed the least improvement or had worsened. Descriptive statistics showed that error-free clauses per 100 words decreased slightly, while clauses with errors per 100 words increased by one clause. Global errors showed a significant decline, while local errors increased from 97 to 158 errors. For errors related to parts of speech, a t-test confirmed that there was a significant difference between the two speech corpora, with a higher error frequency in the 2019 corpus. These data highlight the difficulty of having students self-edit.

Keywords: clause analysis, global vs. local errors, grammatical accuracy, L2 output, longitudinal study

Procedia PDF Downloads 132
7704 Feasibility of Risk Assessment for Type 2 Diabetes in Community Pharmacies Using Two Different Approaches: A Pilot Study in Thailand

Authors: Thitaporn Thoopputra, Tipaporn Pongmesa, Shuchuen Li

Abstract:

Aims: To evaluate the application of a non-invasive diabetes risk assessment tool in the community pharmacy setting. Methods: The Thai diabetes risk score was applied to assess individuals at risk of developing type 2 diabetes. An interactive computer-based risk screening (IT) tool and a paper-based risk screening (PT) tool were applied. Participants aged over 25 years with no known diabetes were recruited in six participating pharmacies. Results: A total of 187 clients were included; their mean age (±SD) was 48.6 (±10.9) years, and 35% were at high risk. The mean willingness-to-pay for the service fee in the IT group was significantly higher than in the PT group (p=0.013). No significant difference was observed in satisfaction between groups. Conclusions: A non-invasive risk assessment tool, whether paper-based or computer-based, can be applied in community pharmacies to support the expanding role of pharmacists in chronic disease management. Long-term follow-up is needed to determine the impact of its application on clinical, humanistic, and economic outcomes.

Keywords: community pharmacy, intervention, prevention, risk assessment, type 2 diabetes

Procedia PDF Downloads 513
7703 Determinants of Aggregate Electricity Consumption in Ghana: A Multivariate Time Series Analysis

Authors: Renata Konadu

Abstract:

In Ghana, electricity has become the main form of energy that all sectors of the economy rely on for their businesses. Therefore, as the economy grows, the demand for and consumption of electricity also grow due to this heavy dependence. However, since the supply of electricity has not increased to match the demand, there have been frequent power outages and load shedding affecting business performance. To solve this problem and advance policies to secure electricity in Ghana, it is imperative that the factors that cause consumption to increase be analysed by considering the three classes of consumers: residential, industrial, and non-residential. The main argument, however, is that exports of electricity to other neighbouring countries should be included in the electricity consumption model and considered as one of the significant factors that can decrease or increase consumption. The author made use of multivariate time series data from 1980-2010 and econometric models such as Ordinary Least Squares (OLS) and a Vector Error Correction Model. Findings show that GDP growth, urban population growth, electricity exports, and industry value added to GDP were cointegrated. The results also showed that there is unidirectional causality from electricity exports, GDP growth, and industry value added to GDP to electricity consumption in the long run. However, in the short run, causality was found between all the variables and electricity consumption. The results have useful implications for energy policy makers, especially with regard to electricity consumption, demand, and supply.
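
A minimal statsmodels sketch of the VECM step described above is shown below. The CSV file and column names are placeholders, and in practice the lag order and cointegration rank would be chosen with information criteria and a Johansen test before fitting.

```python
# Fit a Vector Error Correction Model to a small set of annual series (illustrative only).
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import VECM, select_coint_rank

data = pd.read_csv("ghana_energy.csv", index_col=0)   # hypothetical annual series, 1980-2010
cols = ["electricity_consumption", "gdp_growth", "urban_pop_growth",
        "electricity_exports", "industry_va_gdp"]

rank = select_coint_rank(data[cols], det_order=0, k_ar_diff=1).rank  # Johansen trace test
model = VECM(data[cols], k_ar_diff=1, coint_rank=rank, deterministic="co")
res = model.fit()
print(res.summary())
```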

Keywords: electricity consumption, energy policy, GDP growth, vector error correction model

Procedia PDF Downloads 437
7702 Estimating Anthropometric Dimensions for Saudi Males Using Artificial Neural Networks

Authors: Waleed Basuliman

Abstract:

Anthropometric dimensions are considered one of the important factors when designing human-machine systems. In this study, the estimation of anthropometric dimensions has been improved by using an Artificial Neural Network (ANN) model that is able to predict the anthropometric measurements of Saudi males in Riyadh City. A total of 1427 Saudi males aged 6 to 60 years participated in measuring 20 anthropometric dimensions. These anthropometric measurements are considered important for designing work and life applications in Saudi Arabia. The data were collected over eight months from different locations in Riyadh City. Five of these dimensions were used as predictor variables (inputs) of the model, and the remaining 15 dimensions were set to be the measured variables (the model's outcomes). The hidden layers were varied during the structuring stage, and the best performance was achieved with the network structure 6-25-15. The results showed that the developed neural network model was able to estimate the body dimensions of the Saudi male population in Riyadh City. The network's mean absolute percentage error (MAPE) and root mean squared error (RMSE) were found to be 0.0348 and 3.225, respectively. These errors are smaller, and thus better, than those reported in the literature. Finally, the accuracy of the developed neural network was evaluated by comparing the predicted outcomes with a regression model. The ANN model showed a higher coefficient of determination (R²) between the predicted and actual dimensions than the regression model.

Keywords: artificial neural network, anthropometric measurements, back-propagation

Procedia PDF Downloads 487
7701 The Comparison of the Effect of Mindfulness-Based Relaxation Training and Transcranial Electrical Stimulation and Their Combination on Decreasing Psychological Distress in Patients with Type-2 Diabetes

Authors: Gholam Hossein Javanmard, Roghayeh Mohammadi Garegozlo

Abstract:

The present study was a randomized three-group double-blind clinical trial with a repeated measures design, which aimed to determine the pure and combined effects of the mindfulness-based relaxation (MBR) technique and transcranial electrical stimulation (tCES) on decreasing psychological distress in patients with type-2 diabetes. The sample of the study consisted of 30 patients with type-2 diabetes who were selected from the Diabetes Association of Bonab city in Iran. The participants were matched and then randomly assigned to three groups of 10 subjects (MBR, tCES, MBR+tCES). The subjects received the interventions related to their group in 10 individual sessions. Pre-test, post-test, and one-month follow-up were conducted using the DASS-42. Analysis of variance with repeated measures showed a significant change in psychological distress. Multivariate analysis of covariance and Bonferroni pairwise comparisons indicated that the MBR and tCES interventions have a similar effect on decreasing psychological distress in the post-test and follow-up phases. However, the combined MBR+tCES therapy was more efficient and had a more stable effect. All three interventions, especially the combined MBR+tCES intervention, are therefore suggested as efficient and stable treatments for improving the psychological status of diabetic patients.

Keywords: mindfulness-based relaxation, transcranial electrical stimulation, type 2 diabetes, psychological distress

Procedia PDF Downloads 131
7700 Modeling of the Attitude Control Reaction Wheels of a Spacecraft in Software in the Loop Test Bed

Authors: Amr AbdelAzim Ali, G. A. Elsheikh, Moutaz M. Hegazy

Abstract:

Reaction wheels (RWs) are generally used as the main actuators in the attitude control system (ACS) of a spacecraft (SC) for fast orientation and high pointing accuracy. In order to achieve the required accuracy for the RW model, the main characteristics of the RWs that necessitate analysis during the ACS design phase, namely the technical features, operating sequence, and RW control logic, are included in a function (behavior) model. A mathematical model is developed that includes the various error sources. The errors in control torque include relative and absolute errors and the error due to time delay, while the errors in angular velocity arise from differences between average and real speed, resolution error, looseness in the installation of the angular sensor, and synchronization errors. The friction torque presented in the model includes the different features of friction phenomena: steady-velocity friction, static friction and break-away torque, and frictional lag. The model response is compared with the experimental torque and frequency-response characteristics of tested RWs. Based on the created RW model, some criteria for the optimization-based control torque allocation problem can be recommended, such as avoiding zero-speed crossings, applying a bias angular velocity, or preventing wheels from running at the same angular velocity.
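
A simple illustrative friction-torque model combining the features named above (break-away torque, Coulomb and viscous components in a Stribeck-type curve) is sketched below; the coefficients are hypothetical and the frictional-lag dynamics are omitted for brevity.

```python
# Static/Coulomb/viscous friction torque versus wheel speed (Stribeck-type sketch).
import numpy as np

def friction_torque(omega, t_breakaway=2e-3, t_coulomb=1e-3,
                    c_viscous=5e-6, omega_s=5.0):
    """Friction torque [N*m] as a function of wheel speed omega [rad/s]."""
    stribeck = (t_breakaway - t_coulomb) * np.exp(-(omega / omega_s) ** 2)
    return np.sign(omega) * (t_coulomb + stribeck) + c_viscous * omega

omega = np.linspace(-300, 300, 7)
print(friction_torque(omega))
```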

Keywords: friction torque, reaction wheels modeling, software in the loop, spacecraft attitude control

Procedia PDF Downloads 266
7699 Buckling Resistance of GFRP Sandwich Infill Panels with Different Cores under Increased Temperatures

Authors: WooYoung Jung, V. Sim

Abstract:

This paper presents a numerical analysis of the buckling resistance strength of a polymer matrix composite (PMC) infill panel system under the influence of temperature on the foam core. The failure mode under in-plane compression is investigated by means of numerical analysis on the ABAQUS platform. The parameters considered in this study are the contact length, the type of foam used for the core, and the variation of its Young's modulus under thermal influence. The variation of temperature is considered in static cases and is applied only to the core. It is shown that the effect of temperature on the mechanical properties of the panel system is significant; the variations of temperature result in reductions of the system strength. This is due to the polymeric nature of the core material. Additionally, the contact length also affects the performance of the infill panel, and its significance depends on the type of polymer used for the core. Hence, by comparing different types of core material, the variation can be reduced.

Keywords: buckling, contact length, foam core, temperature dependent

Procedia PDF Downloads 298
7698 A Homogeneous Catalytic System for Decolorization of a Mixture of Orange G Acid and Naphthol Blue-Black Dye Based on Hydrogen Peroxide and a Recyclable DAWSON Type Heteropolyanion

Authors: Ouahiba Bechiri, Mostefa Abbessi

Abstract:

Color removal from industrial effluents is a major concern in wastewater treatment. The main objective of this work was to study the decolorization of a mixture of Orange G acid (OG) and naphthol blue-black dye (NBB) in aqueous solution by hydrogen peroxide using [H1,5Fe1,5P2W12Mo6O61,23H2O] as catalyst. [H1,5Fe1,5P2W12Mo6O61,23H2O] is a recyclable Dawson-type heteropolyanion. The effects of various experimental parameters on the oxidation reaction of the dyes were investigated. The studied parameters were the initial pH, the H2O2 concentration, the catalyst mass, and the temperature. The optimum conditions were determined, and it was found that the degradation efficiency obtained after 15 minutes of reaction was about 100%. The optimal parameters were: initial pH = 3; [H2O2]0 = 0.08 mM; catalyst mass = 0.05 g; for a dye concentration of 30 mg/L.

Keywords: Dawson type heteropolyanion, naphthol blue-black, dye degradation, orange G acid, oxidation, hydrogen peroxide

Procedia PDF Downloads 360
7697 Examining the Changes in Complexity, Accuracy, and Fluency in Japanese L2 Writing Over an Academic Semester

Authors: Robert Long

Abstract:

This study presents the results of a one-year investigation of the evolution of complexity, accuracy, and fluency (CAF) in the compositions of Japanese L2 university students over an academic semester. One goal was to determine whether any improvement in writing abilities had occurred over this academic term, while another was to examine methods of editing. Participants had 30 minutes to write each essay, with an additional 10 minutes allotted for editing. As for editing, participants were divided into two groups, one of which utilized an online grammar checker, while the other half self-edited their initial manuscripts. There was a total of 159 students from three different institutions. Research questions focused on determining whether CAF had evolved over the year, identifying potential variations in editing techniques, and describing the connections between the CAF dimensions. According to the findings, there was some improvement in accuracy (fewer errors) on all three measures, whereas there was a marked decline in complexity and fluency. As for the second research aim, relating to the interaction among the three dimensions (CAF) and whether possible increases in fluency were offset by decreases in grammatical accuracy, the results showed a logically high correlation between clauses and word counts, between mean length of T-unit (MLT) and coordinate phrases per T-unit (CP/T), and between MLT and clauses per T-unit (C/T); furthermore, word counts and the errors-per-100-words ratio correlated highly with error-free clause totals (EFCT). Measures of syntactic complexity had a negative correlation with EFCT, indicating that greater syntactic complexity is related to decreased accuracy. Concerning differences in error correction between those who self-edited and those who used an online grammar correction tool, the results indicated that the variable of error-free clause ratios (EFCR) showed the greatest difference regarding accuracy, with fewer errors noted among writers using an online grammar checker. As for possible differences between the first and second (edited) drafts regarding CAF, the results indicated positive changes in accuracy, with the most significant change seen in complexity (CP/T and MLT), while there were relatively insignificant changes in fluency. Results also indicated significant differences among the three institutions, with Fujian University of Technology showing the most fluency and accuracy. These findings suggest that to raise students' awareness of their overall writing development, teachers should support them in developing more complex syntactic structures, improving their fluency, and making more effective use of online grammar checkers.

Keywords: complexity, accuracy, fluency, writing

Procedia PDF Downloads 39
7696 Performance of High Efficiency Video Codec over Wireless Channels

Authors: Mohd Ayyub Khan, Nadeem Akhtar

Abstract:

Due to recent advances in wireless communication technologies and hand-held devices, there is a huge demand for video-based applications such as video surveillance, video conferencing, remote surgery, Digital Video Broadcast (DVB), IPTV, online learning courses, YouTube, WhatsApp, Instagram, Facebook, and interactive video games. However, raw video possesses very high bandwidth, which makes compression a must before its transmission over wireless channels. The High Efficiency Video Codec (HEVC), also called H.265, is the latest state-of-the-art video coding standard, developed through the joint effort of the ITU-T and ISO/IEC teams. HEVC targets high-resolution videos, such as 4K or 8K resolutions, that can fulfil the recent demands for video services. The compression ratio achieved by HEVC is twice that of its predecessor, H.264/AVC, for the same quality level. Compression efficiency is generally increased by removing more correlation between frames/pixels using complex techniques such as extensive intra and inter prediction. As more correlation is removed, the chances of interdependency among coded bits increase. Thus, bit errors may have a large effect on the reconstructed video; sometimes even a single bit error can lead to catastrophic failure of the reconstructed video. In this paper, we study the performance of the HEVC bitstream over an additive white Gaussian noise (AWGN) channel. Moreover, HEVC over Quadrature Amplitude Modulation (QAM) combined with forward error correction (FEC) schemes is also explored over the noisy channel. The video is encoded using HEVC, and the coded bitstream is channel coded to provide some redundancy. The channel-coded bitstream is then modulated using QAM and transmitted over the AWGN channel. At the receiver, the symbols are demodulated and channel decoded to obtain the video bitstream. The bitstream is then used to reconstruct the video using the HEVC decoder. It is observed that as the signal-to-noise ratio of the channel decreases, the quality of the reconstructed video decreases drastically. Using proper FEC codes, the quality of the video can be restored up to a certain extent. Thus, the performance analysis of HEVC presented in this paper may assist in designing the optimized code rate of FEC such that the quality of the reconstructed video is maximized over wireless channels.
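
The channel part of such a pipeline can be sketched with a Gray-mapped 4-QAM (QPSK) link over AWGN and hard-decision demodulation, as below. An HEVC bitstream and an FEC code would be wrapped around this in the full chain; both are omitted here, and all parameters are illustrative.

```python
# Gray-mapped QPSK (4-QAM) over an AWGN channel with hard decisions: BER vs Eb/N0.
import numpy as np

def qpsk_ber(ebn0_db, n_bits=200_000, seed=0):
    rng = np.random.default_rng(seed)
    bits = rng.integers(0, 2, n_bits)
    # one bit on the I axis, one bit on the Q axis, unit symbol energy
    sym = ((2 * bits[0::2] - 1) + 1j * (2 * bits[1::2] - 1)) / np.sqrt(2)
    ebn0 = 10 ** (ebn0_db / 10)
    n0 = 1 / (2 * ebn0)                           # Es = 1 and 2 bits per symbol
    noise = np.sqrt(n0 / 2) * (rng.standard_normal(sym.size)
                               + 1j * rng.standard_normal(sym.size))
    r = sym + noise
    bits_hat = np.empty(n_bits, dtype=int)
    bits_hat[0::2] = (r.real > 0).astype(int)     # hard-decision demodulation
    bits_hat[1::2] = (r.imag > 0).astype(int)
    return np.mean(bits_hat != bits)

for snr in (0, 4, 8, 12):
    print(f"Eb/N0 = {snr} dB  ->  BER = {qpsk_ber(snr):.4f}")
```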

Keywords: AWGN, forward error correction, HEVC, video coding, QAM

Procedia PDF Downloads 149
7695 Robotic Logging Technology: The Future of Oil Well Logging

Authors: Nitin Lahkar, Rishiraj Goswami

Abstract:

“Oil well logging”, or the practice of making a detailed record (a well log) of the geologic formations penetrated by a borehole, is an important practice in the oil and gas industry. Although a lot of research has been undertaken in this field, some basic limitations still exist. One of the main arenas where a plethora of problems arises is logistically challenged areas. Accessibility and availability of efficient manpower, resources, and technology are restricted, time-consuming to arrange, and often costly in these areas. In this regard, the main challenge is to decrease the Non-Productive Time (NPT) involved in the conventional logging process. Thinking about a solution to this problem has given rise to a revolutionary concept called “Robotic Logging Technology”. Robotic logging technology promises the advent of successful logging in all kinds of wells and trajectories. It consists of a wireless logging tool controlled from the surface. This eliminates the need for a logging truck to be summoned, which in turn saves precious rig time. The robotic logging tool here is designed such that it can move inside the well by different proposed mechanisms and models, listed in the full paper as TYPE A, TYPE B, and TYPE C. These types are classified on the basis of their operational technology, movement, and the conditions/wells in which the tool is to be used. Thus, depending on subsurface conditions, available energy sources, and convenience, the type of robotic model will be selected. Advantages over conventional logging techniques: reduction in Non-Productive Time, lower energy requirements, very fast action compared to all other forms of logging, and the ability to perform well in all kinds of well trajectories (vertical/horizontal/inclined).

Keywords: robotic logging technology, innovation, geology, geophysics

Procedia PDF Downloads 306
7694 Fabrication of a Potential Point-of-Care Device for Hemoglobin A1c: A Lateral Flow Immunosensor

Authors: Shu Hwang Ang, Choo Yee Yu, Geik Yong Ang, Yean Yean Chan, Yatimah Binti Alias, Sook Mei Khor

Abstract:

With the high prevalence of Type 2 diabetes mellitus across the world, the morbidities and mortalities associated with Type 2 diabetes have a significant impact on a nation's productivity. Even with routinely scheduled clinical visits to manage Type 2 diabetes, diabetic patients with hectic lifestyles can have low clinical compliance, which often decreases the effectiveness of diabetes management personalized for each patient. Here, we report a useful point-of-care (POC) device that detects glycated hemoglobin (HbA1c), a biomarker for long-term Type 2 diabetes management. The established POC devices certified for use in clinical settings are not only expensive ($8 to $10 per test), they also require skillful practitioners to perform sampling and interpretation. As a paper-based biosensor, the developed HbA1c biosensor utilizes the lateral flow principle to offer a cost-effective (approximately $2 per test) and end-user-friendly alternative for household testing. Requiring as little as 2 µL of finger-prick blood, the test can be performed at home with just simple dilution and washing steps. With visual interpretation of the number of test lines shown on the developed biosensor, it can be read as easily as a urine pregnancy test, aided by the intensity scale provided. In summary, the developed HbA1c immunosensor has been shown to have high selectivity towards HbA1c and is stable, with reasonably good performance in clinical testing. Therefore, our developed HbA1c immunosensor has high potential to be an effective diabetes management tool that increases patient compliance and thus contains the progression of diabetes.

Keywords: blood, glycated hemoglobin (HbA1c), lateral flow, type 2 diabetes mellitus

Procedia PDF Downloads 528
7693 Enhanced Physiological Response of Blood Pressure and Improved Performance in Successive Divided Attention Test Seen with Classical Instrumental Background Music Compared to Controls

Authors: Shantala Herlekar

Abstract:

Introduction: Entrainment effect of music on cardiovascular parameters is well established. Music is being used in the background by medical students while studying. However, does it really help them relax faster and concentrate better? Objectives: This study was done to compare the effects of classical instrumental background music versus no music on blood pressure response over time and on successively performed divided attention test in Indian and Malaysian 1st-year medical students. Method: 60 Indian and 60 Malaysian first year medical students, with an equal number of girls and boys were randomized into two groups i.e music group and control group thus creating four subgroups. Three different forms of Symbol Digit Modality Test (to test concentration ability) were used as a pre-test, during music/control session and post-test. It was assessed using total, correct and error score. Simultaneously, multiple Blood Pressure recordings were taken as pre-test, during 1, 5, 15, 25 minutes during music/control (+SDMT) and post-test. The music group performed the test with classical instrumental background music while the control group performed it in silence. Results were analyzed using students paired t test. p value < 0.05 was taken as statistically significant. A drop in BP recording was indicative of relaxed state and a rise in BP with task performance was indicative of increased arousal. Results: In Symbol Digit Modality Test (SDMT) test, Music group showed significant better results for correct (p = 0.02) and total (p = 0.029) scores during post-test while errors reduced (p = 0.002). Indian music group showed decline in post-test error scores (p = 0.002). Malaysian music group performed significantly better in all categories. Blood pressure response was similar in music and control group with following variations, a drop in BP at 5minutes, being significant in music group (p < 0.001), a steep rise in values till 15minutes (corresponding to SDMT test) also being significant only in music group (p < 0.001) and the Systolic BP readings in controls during post-test were at lower levels compared to music group. On comparing the subgroups, not much difference was noticed in recordings of Indian student’s subgroups while all the paired-t test values in the Malaysian music group were significant. Conclusion: These recordings indicate an increased relaxed state with classical instrumental music and an increased arousal while performing a concentration task. Music used in our study was beneficial to students irrespective of their nationality and preference of music type. It can act as an “active coping” strategy and alleviate stress within a very short period of time, in our study within a span of 5minutes. When used in the background, during task performance, can increase arousal which helps the students perform better. Implications: Music can be used between lectures for a short time to relax the students and help them concentrate better for the subsequent classes, especially for late afternoon sessions.

Keywords: blood pressure, classical instrumental background music, ethnicity, symbol digit modality test

Procedia PDF Downloads 141
7692 The Study of Sintered Wick Structure of Heat Pipes with Excellent Heat Transfer Capabilities

Authors: Im-Nam Jang, Yong-Sik Ahn

Abstract:

In this study, a sintered wick was formed in a heat pipe by sintering a mixture of copper powder, with particle sizes of 100 μm and 200 μm, and a pore-forming agent. The heat pipe's thermal resistance, which affects its heat transfer efficiency, is determined during manufacturing according to the powder type, the thickness of the sintered wick, and the filling rate of the working fluid. Heat transfer efficiency was then tested at various inclination angles (0°, 45°, 90°) to evaluate the performance of the heat pipes. Regardless of the filling amount and test angle, the 200 μm copper powder type exhibited superior heat transfer efficiency compared to the 100 μm type. After analyzing heat transfer performance at various filling rates between 20% and 50%, it was determined that the heat pipe's optimal heat transfer capability occurred at a working fluid filling rate of 30%. The width of the wick was directly related to the heat transfer performance.

Keywords: heat pipe, heat transfer performance, effective pore size, capillary force, sintered wick

Procedia PDF Downloads 64