Search results for: type I error
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8320

7450 Estimation of Maize Yield by Using a Process-Based Model and Remote Sensing Data in the Northeast China Plain

Authors: Jia Zhang, Fengmei Yao, Yanjing Tan

Abstract:

Accurate estimation of crop yield is of great importance for food security. In this study, a process-based mechanistic model, the RS-P-YEC (Remote-Sensing-Photosynthesis-Yield Estimation for Crops) model, was modified to estimate the yield of a C4 crop by adapting the carbon metabolic pathway in its photosynthesis sub-module. Yield was calculated by multiplying net primary productivity (NPP) by the harvest index (HI), derived from the ratio of grain to stalk yield. The modified RS-P-YEC model was used to simulate maize yield in the Northeast China Plain over the period 2002-2011, and county-level statistical maize yield data from the study area were used to validate the simulated results. The results showed a Pearson correlation coefficient (R) of 0.827 (P < 0.01) between the simulated yields and the statistical data, with a root mean square error (RMSE) of 712 kg/ha and a relative error (RE) of 9.3%. From 2002 to 2011, maize yield in the planting zone of the Northeast China Plain increased, with a small coefficient of variation (CV). The spatial pattern of simulated maize yield was consistent with the actual distribution in the Northeast China Plain, with an increasing trend from the northeast to the southwest. These results demonstrate that the modified process-based model, coupled with remote sensing data, is suitable for spatial-scale maize yield prediction in the Northeast China Plain.
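As a check on the arithmetic, the validation statistics quoted above (R, RMSE and RE) can be computed from paired simulated and observed county yields. A minimal sketch; note that defining RE as RMSE relative to the observed mean is an assumption, since the abstract does not define it:

```python
import math

def validation_metrics(simulated, observed):
    """Pearson R, RMSE, and relative error (%) between simulated and observed yields."""
    n = len(simulated)
    mean_s = sum(simulated) / n
    mean_o = sum(observed) / n
    cov = sum((s - mean_s) * (o - mean_o) for s, o in zip(simulated, observed))
    var_s = sum((s - mean_s) ** 2 for s in simulated)
    var_o = sum((o - mean_o) ** 2 for o in observed)
    r = cov / math.sqrt(var_s * var_o)
    rmse = math.sqrt(sum((s - o) ** 2 for s, o in zip(simulated, observed)) / n)
    re = rmse / mean_o * 100.0  # assumed definition: RMSE as % of observed mean
    return r, rmse, re
```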

Keywords: process-based model, C4 crop, maize yield, remote sensing, Northeast China Plain

Procedia PDF Downloads 347
7449 Clinical Profile and Outcome of Type I Diabetes Mellitus at a Tertiary Care-Centre in Eastern Nepal

Authors: Gauri Shankar Shah

Abstract:

Objectives: Type I diabetes mellitus in children is frequently missed, and children often present to the emergency department with diabetic ketoacidosis, which carries significant morbidity and mortality. The present study was done to describe the clinical presentation and outcome at a tertiary-care centre. Methods: This was a retrospective analysis of data on Type I diabetes mellitus patients reporting to our centre during one year (2012-2013). Results: There were 12 patients (8 males) aged 4-14 years (mean ± 3.7). The presenting symptoms were fever, vomiting, altered sensorium and fast breathing in 8 (66.6%), 6 (50%), 4 (33.3%), and 4 (33.3%) cases, respectively. The classical triad of polyuria, polydipsia, and polyphagia was present in only two patients. Seizures and epigastric pain were found in two cases each. Four cases (33.3%) presented with diabetic ketoacidosis due to discontinuation of insulin doses, while 2 had hyperglycemia alone. The hemogram revealed a mean hemoglobin of 12.1 ± 1.6 g/dL and a total leukocyte count of 22,883.3 ± 10,345.9 per mm3, with a polymorph percentage of 73.1 ± 9.0%. The mean blood sugar at presentation was 740 ± 277 mg/dL (544-1240). HbA1c ranged between 7.1 and 8.8%, with a mean of 8.1 ± 0.6%. The mean sodium, potassium, blood pH, pCO2, pO2 and bicarbonate were 140.8 ± 6.9 mEq/L, 4.4 ± 1.8 mEq/L, 7.0 ± 0.2, 20.2 ± 10.8 mmHg, 112.6 ± 46.5 mmHg and 9.2 ± 8.8 mEq/L, respectively. All patients were managed in the pediatric intensive care unit as per our protocol, recovered, and were discharged on intermediate-acting insulin given twice daily. Conclusions: These patients have uncontrolled hyperglycemia and often present to the emergency department with ketoacidosis and a deranged biochemical profile. Regular administration of insulin, frequent monitoring of blood sugar and health education are required to achieve better metabolic control and good quality of life.

Keywords: type I diabetes mellitus, hyperglycemia, outcome, glycemic control

Procedia PDF Downloads 242
7448 On the Fractional Integration of Generalized Mittag-Leffler Type Functions

Authors: Christian Lavault

Abstract:

In this paper, generalized fractional integral operators of two generalized Mittag-Leffler type functions are investigated. The special cases of interest involve the generalized M-series and the K-function, both introduced by Sharma. The two pairs of theorems established herein generalize recent results on left- and right-sided generalized fractional integration operators, applied here to the M-series and the K-function. The note also points to important applications in physics and mathematical engineering.
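For reference, two standard building blocks behind results of this kind are the left-sided Riemann–Liouville fractional integral and the Prabhakar-type generalized Mittag-Leffler function (the M-series and K-function arise as related generalizations). These are textbook definitions, not taken from this paper:

```latex
\[
\bigl(I_{0+}^{\alpha} f\bigr)(x)
  = \frac{1}{\Gamma(\alpha)} \int_0^x (x-t)^{\alpha-1} f(t)\,dt,
  \qquad \operatorname{Re}(\alpha) > 0,
\]
\[
E_{\alpha,\beta}^{\gamma}(z)
  = \sum_{n=0}^{\infty} \frac{(\gamma)_n}{\Gamma(\alpha n + \beta)}\,\frac{z^n}{n!},
\qquad (\gamma)_n = \frac{\Gamma(\gamma+n)}{\Gamma(\gamma)}.
\]
```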

Keywords: Fox–Wright Psi function, generalized hypergeometric function, generalized Riemann–Liouville and Erdélyi–Kober fractional integral operators, Saigo's generalized fractional calculus, Sharma's M-series and K-function

Procedia PDF Downloads 424
7447 Modeling of Age Hardening Process Using Adaptive Neuro-Fuzzy Inference System: Results from Aluminum Alloy A356/Cow Horn Particulate Composite

Authors: Chidozie C. Nwobi-Okoye, Basil Q. Ochieze, Stanley Okiy

Abstract:

This research reports on the modeling of the age hardening process using an adaptive neuro-fuzzy inference system (ANFIS). The age hardening output (hardness) was predicted using ANFIS, with ageing time, temperature and percentage composition of cow horn particles (CHp%) as input parameters. The correlation coefficient (R) of the predicted versus measured hardness values was 0.9985. Subsequently, values outside the experimental data points were predicted. When the temperature was kept constant and the other input parameters were varied, the average relative error of the predicted values was 0.0931%. When the temperature was varied and the other input parameters kept constant, the average relative error of the hardness predictions was 80%. The results show that ANFIS trained on coarse experimental data points is not very effective in predicting process outputs in the age hardening operation of the A356 alloy/CHp particulate composite. The fine experimental data required by ANFIS make it more expensive for modeling and optimization of age hardening operations of the A356 alloy/CHp particulate composite.
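ANFIS networks are, at their core, first-order Sugeno fuzzy systems whose membership and consequent parameters are tuned by learning. A minimal sketch of the inference step only, with hypothetical Gaussian membership functions over the three inputs named above (ageing time, temperature, CHp%); the rule parameters here are illustrative, not the authors':

```python
import math

def gauss(x, centre, sigma):
    """Gaussian membership function."""
    return math.exp(-((x - centre) ** 2) / (2 * sigma ** 2))

def sugeno_predict(time_h, temp_c, chp_pct, rules):
    """First-order Sugeno inference: weighted average of linear consequents.
    Each rule is ((centres, sigmas), (p, q, r, bias))."""
    num = den = 0.0
    for (centres, sigmas), (p, q, r, bias) in rules:
        w = (gauss(time_h, centres[0], sigmas[0])
             * gauss(temp_c, centres[1], sigmas[1])
             * gauss(chp_pct, centres[2], sigmas[2]))
        num += w * (p * time_h + q * temp_c + r * chp_pct + bias)
        den += w
    return num / den
```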

Keywords: adaptive neuro-fuzzy inference system (ANFIS), age hardening, aluminum alloy, metal matrix composite

Procedia PDF Downloads 135
7446 Key Roles of the N-Type Oxide Layer in Hybrid Perovskite Solar Cells

Authors: Thierry Pauporté

Abstract:

Wide bandgap n-type oxide layers (TiO2, SnO2, ZnO, etc.) play key roles in perovskite solar cells. They act as electron transport layers and enable charge separation. They also serve as the substrate for perovskite deposition in the direct architecture; therefore, they strongly influence the perovskite loading and its crystallinity, and they can induce degradation upon annealing. The oxide/perovskite heterointerface is important, and its quality must be optimized to limit charge recombination phenomena and performance losses. One can also combine two oxide contact layers to improve device stability and durability. These aspects will be developed and illustrated on the basis of recent results obtained at Chimie-ParisTech.

Keywords: oxide, hybrid perovskite, solar cells, impedance

Procedia PDF Downloads 301
7445 Unified Power Quality Conditioner Presentation and Dimensioning

Authors: Abderrahmane Kechich, Othmane Abdelkhalek

Abstract:

Static converters behave as nonlinear loads that inject harmonic currents into the grid and increase reactive power consumption. On the other hand, the increased use of sensitive equipment requires sinusoidal supply voltages. As a result, control of electrical power quality has become a major concern in the field of power electronics. In this context, the unified power quality conditioner (UPQC) was developed. It combines series and parallel structures: the series filter protects sensitive loads and compensates for voltage disturbances such as voltage harmonics, voltage dips or flicker, while the shunt filter compensates for current disturbances such as current harmonics, reactive currents and imbalance. This dual capability makes it one of the most appropriate devices for power quality improvement. Calculating the UPQC parameters is an important step but not an easy one; several researchers have relied on trial and error, a method that is difficult for beginning researchers, especially for the controller parameters. This paper therefore gives a mathematical way to calculate almost all of the UPQC parameters without resorting to trial and error. It also gives a new approach for calculating the PI regulator parameters, with the aim of obtaining a stable UPQC able to compensate for disturbances acting on the line voltage and load current waveforms and thus to improve electrical power quality.
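The paper's actual derivation is not reproduced here, but one common mathematical route to PI regulator parameters, pole placement for a first-order L-R filter plant G(s) = 1/(Ls + R), can be sketched as follows; the plant model and damping choice are assumptions for illustration:

```python
import math

def pi_gains(L, R, f_bw, zeta=0.707):
    """Pole-placement PI gains for the current loop of an L-R filter.
    The closed loop with C(s) = Kp + Ki/s is matched to the standard
    second-order form s^2 + 2*zeta*wn*s + wn^2."""
    wn = 2 * math.pi * f_bw       # desired natural frequency, rad/s
    kp = 2 * zeta * wn * L - R    # proportional gain
    ki = wn ** 2 * L              # integral gain
    return kp, ki
```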

Keywords: UPQC, shunt active filter, series active filter, PI controller, PWM control, dual-loop control

Procedia PDF Downloads 386
7444 Determination of Direct Solar Radiation Using Atmospheric Physics Models

Authors: Pattra Pukdeekiat, Siriluk Ruangrungrote

Abstract:

This work set out to precisely determine direct solar radiation using atmospheric physics models, since accurate prediction of solar radiation is necessary and useful for solar energy applications, including atmospheric research. Models and techniques for calculating regional direct solar radiation are essential when instrumental measurements are unavailable. The investigation was mathematically governed by six astronomical parameters, i.e., declination (δ), hour angle (ω), solar time, solar zenith angle (θz), extraterrestrial radiation (Iso) and eccentricity (E0), along with two atmospheric parameters, i.e., air mass (mr) and dew point temperature, at the Bangna meteorological station (13.67° N, 100.61° E) in Bangkok, Thailand. Five models of solar radiation determination were analyzed under a clear-sky assumption, accompanied by three statistical tests, Mean Bias Difference (MBD), Root Mean Square Difference (RMSD) and coefficient of determination (R²), to validate the accuracy of the results. The calculated direct solar radiation was in the range of 491-505 W/m² with a relative percentage error of 8.41% for winter, and 532-540 W/m² with a relative percentage error of 4.89% for summer 2014. Additionally, datasets of seven continuous days representing both seasons were considered, with MBD, RMSD and R² of -0.08, 0.25, 0.86 and -0.14, 0.35, 3.29, respectively, corresponding to the Kumar model for winter and the CSR model for summer. In summary, determining direct solar radiation from atmospheric models and empirical equations can provide immediate and reliable values of the solar components for any site in the region without requiring actual measurements.
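The astronomical parameters listed above follow well-known formulas (Cooper's declination, the eccentricity correction factor, and the zenith-angle relation). A sketch assuming those standard forms, not the specific equations of the five models tested:

```python
import math

ISC = 1367.0  # solar constant, W/m^2

def declination(n):
    """Cooper's formula: solar declination (degrees) for day of year n."""
    return 23.45 * math.sin(math.radians(360.0 * (284 + n) / 365.0))

def eccentricity(n):
    """Eccentricity correction factor E0 for day of year n."""
    return 1.0 + 0.033 * math.cos(math.radians(360.0 * n / 365.0))

def zenith_angle(lat_deg, n, solar_time_h):
    """Solar zenith angle (degrees) from latitude, day of year and solar time."""
    dec = math.radians(declination(n))
    lat = math.radians(lat_deg)
    hour_angle = math.radians(15.0 * (solar_time_h - 12.0))
    cos_z = (math.sin(lat) * math.sin(dec)
             + math.cos(lat) * math.cos(dec) * math.cos(hour_angle))
    return math.degrees(math.acos(cos_z))

def extraterrestrial(n, lat_deg, solar_time_h):
    """Extraterrestrial irradiance on a horizontal surface, W/m^2."""
    cz = math.cos(math.radians(zenith_angle(lat_deg, n, solar_time_h)))
    return max(0.0, ISC * eccentricity(n) * cz)
```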

Keywords: atmospheric physics models, astronomical parameters, atmospheric parameters, clear sky condition

Procedia PDF Downloads 395
7443 Evaluation of Organizational Culture and Its Effects on Innovation in the IT Sector: A Case Study from UAE

Authors: Amir M. Shikhli, Refaat H. Abdel-Razek, Salaheddine Bendak

Abstract:

Innovation is considered one of the key factors that influence the long-term success of any company. The problem in many organizations in developing countries is that they try to implement innovation without a strong basis for it within the organizational culture. The objective of this study is to assess the effects of organizational culture on innovation in one of the biggest information technology organizations in the UAE, Injazat Data System. First, the Organizational Culture Assessment Instrument (OCAI) was used as a survey, with the Competing Values Framework as a model, to analyze the existing culture within the organization and determine its characteristics. A modified version of the Community Innovation Survey (CIS) was then used to determine the innovation types introduced by the organization, and multiple linear regression analysis was used to find the effects of the existing organizational culture on innovation. Results show that the existing organizational culture is a combination of Hierarchy (29.4%), Clan (25.8%), Market (24.9%) and Adhocracy (19.9%). Results of the second survey show that the organization focuses on organizational innovation (26.8%), followed by market and product innovations (25.6%) and finally process innovation (22.0%). Regression analysis reveals a recommended combination of the four culture types for each innovation type. For product innovation, the combination is 47.4% Clan, 17.9% Adhocracy, 1.0% Market and 33.3% Hierarchy; for process innovation it is 19.7% Clan, 45.2% Adhocracy, 32.0% Market and 3.1% Hierarchy; for organizational innovation it is 5.4% Clan, 32.7% Adhocracy, 6.0% Market and 55.9% Hierarchy; and for market innovation it is 25.5% Clan, 42.6% Adhocracy, 32.6% Market and 8.4% Hierarchy. Based on these recommended combinations, this study suggests two ways to enhance the innovation culture in the organization.
First, if management decides on the innovation type to be enhanced, comparing the existing culture with the recommended combination for that innovation type gives the difference in percentage for each culture type, and further analysis should show how to modify the existing culture to match the recommended combination. Second, if no innovation type is selected but management wants to enhance the innovation culture in general, the differences in percentage for each culture type point to the recommended combination that gives the narrowest gap between the existing culture and the recommended combination.
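The recommended culture combinations above come from multiple linear regression. The fitting step can be sketched with ordinary least squares on hypothetical survey data; the solver below uses the normal equations, which is one standard approach, not necessarily the authors':

```python
def ols(X, y):
    """Ordinary least squares via the normal equations (X^T X) b = X^T y,
    solved by Gaussian elimination with partial pivoting.
    Rows of X are observations (e.g. culture-type shares); y is the
    innovation score being modeled."""
    n, k = len(X), len(X[0])
    A = [[sum(X[i][p] * X[i][q] for i in range(n)) for q in range(k)]
         for p in range(k)]
    b = [sum(X[i][p] * y[i] for i in range(n)) for p in range(k)]
    for c in range(k):                       # forward elimination
        piv = max(range(c, k), key=lambda r: abs(A[r][c]))
        A[c], A[piv] = A[piv], A[c]
        b[c], b[piv] = b[piv], b[c]
        for r in range(c + 1, k):
            f = A[r][c] / A[c][c]
            for cc in range(c, k):
                A[r][cc] -= f * A[c][cc]
            b[r] -= f * b[c]
    beta = [0.0] * k                         # back substitution
    for r in range(k - 1, -1, -1):
        beta[r] = (b[r] - sum(A[r][cc] * beta[cc]
                              for cc in range(r + 1, k))) / A[r][r]
    return beta
```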

Keywords: developing countries, organizational culture, innovation types, product innovation, process innovation, organizational innovation, marketing innovation

Procedia PDF Downloads 257
7442 Prediction of Compressive Strength of Concrete from Early Age Test Results Using Design of Experiments (RSM)

Authors: Salem Alsanusi, Loubna Bentaher

Abstract:

Response surface methodology (RSM) provides statistically validated predictive models that can be manipulated to find optimal process configurations. Variation transmitted to responses from poorly controlled process factors can be accounted for by the mathematical technique of propagation of error (POE), which facilitates 'finding the flats' on the surfaces generated by RSM. The dual response approach to RSM captures the standard deviation of the output as well as the average, accounting for unknown sources of variation. Dual response plus propagation of error (POE) provides a more useful model of overall response variation. In our case, we implemented this technique to predict the 28-day compressive strength of concrete, since waiting 28 days is time-consuming while quality control must still be ensured. This paper investigates the potential of using design of experiments (DOE-RSM) to predict the compressive strength of concrete at 28 days. The data used for this study came from experimental schemes at the Civil Engineering Department, University of Benghazi. A total of 114 data sets were used. The ACI mix design method was utilized for the mix design. No admixtures were used; only the main concrete constituents, i.e., cement, coarse aggregate, fine aggregate and water, were utilized in all mixes, with different mix proportions and water-cement ratios. The proposed mathematical models are capable of predicting the required concrete compressive strength from early-age test results.
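The POE idea described above can be illustrated on a one-factor quadratic response surface: the standard deviation transmitted to the response is approximately the local slope times the factor's standard deviation, and 'the flats' are where that slope vanishes. A sketch with hypothetical coefficients:

```python
def poe_std(coeffs, x, sd_x):
    """Propagation of error for a quadratic response y = a + b*x + c*x^2:
    the std transmitted to y from factor x is approximately |dy/dx| * sd_x."""
    a, b, c = coeffs
    dydx = b + 2.0 * c * x
    return abs(dydx) * sd_x

def flat_point(coeffs):
    """Factor setting where transmitted variation vanishes (dy/dx = 0)."""
    a, b, c = coeffs
    return -b / (2.0 * c)
```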

Keywords: mix proportioning, response surface methodology, compressive strength, optimal design

Procedia PDF Downloads 249
7441 A Convolution Neural Network PM-10 Prediction System Based on a Dense Measurement Sensor Network in Poland

Authors: Piotr A. Kowalski, Kasper Sapala, Wiktor Warchalowski

Abstract:

PM10 is suspended dust that primarily has a negative effect on the respiratory system. PM10 is responsible for attacks of coughing and wheezing, asthma, and acute, violent bronchitis. Indirectly, PM10 also negatively affects the rest of the body, including increasing the risk of heart attack and stroke. Unfortunately, Poland is a country that cannot boast of good air quality, in particular due to large PM concentration levels. Therefore, based on the dense network of Airly sensors, it was decided to address the problem of predicting suspended particulate matter concentration. Because of the very complicated nature of this issue, a machine learning approach was used. For this purpose, convolutional neural networks (CNNs) were adopted, these currently being the leading information processing methods in the field of computational intelligence. The aim of this research is to show the influence of particular CNN network parameters on the quality of the obtained forecast. The forecast itself is made on the basis of parameters measured by Airly sensors and is produced for the subsequent day, hour by hour. The evaluation of the learning process for the investigated models was mostly based on the mean square error criterion; however, during model validation, a number of other quantitative evaluation methods were taken into account. The presented pollution prediction model has been verified on real weather and air pollution data taken from the Airly sensor network. The dense and distributed network of Airly measurement devices gives access to current and archival data on air pollution, temperature, suspended particulate matter PM1.0, PM2.5, and PM10, CAQI levels, as well as atmospheric pressure and air humidity. In this investigation, PM2.5 and PM10, temperature and wind information, as well as external forecasts of temperature and wind for the next 24 hours, served as input data.
Because of the specificity of CNN-type networks, these data are transformed into tensors and then processed. The network consists of an input layer, an output layer, and many hidden layers, in which convolution and pooling operations are performed. The output of the system is a vector of 24 elements containing the predicted PM10 concentration for each hour of the upcoming 24-hour period. Over 1000 models based on the CNN methodology were tested during the study; the several giving the best results were selected, and a comparison was then made with models based on linear regression. The numerical tests, carried out using real 'big' data, fully confirmed the positive properties of the presented method. Models based on the CNN technique allow prediction of PM10 dust concentration with a much smaller mean square error than the currently used methods based on linear regression. Moreover, the use of neural networks increased the coefficient of determination (R²) by about 5 percent compared to the linear model. During the simulation, the R² coefficient was 0.92, 0.76, 0.75, 0.73, and 0.73 for the 1st, 6th, 12th, 18th, and 24th hour of prediction, respectively.
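The transformation of hourly sensor readings into input tensors with 24-element targets can be organized as a sliding window over the time series. The exact feature layout used by the authors is not given, so the sketch below is illustrative:

```python
def make_windows(series, window=24, horizon=24):
    """Build (input, target) training pairs from an hourly time series.
    `series` is a list of (features_tuple, pm10) samples, oldest first.
    Each input is `window` consecutive feature vectors; each target is the
    next `horizon` PM10 readings (the 24-element prediction vector)."""
    pairs = []
    last = len(series) - window - horizon + 1
    for start in range(last):
        x = [series[i][0] for i in range(start, start + window)]
        y = [series[i][1] for i in range(start + window, start + window + horizon)]
        pairs.append((x, y))
    return pairs
```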

Keywords: air pollution prediction (forecasting), machine learning, regression task, convolution neural networks

Procedia PDF Downloads 128
7440 Self-Management among the Ethnic Groups with Type 2 Diabetes Mellitus in Thailand

Authors: Siwarak Kitchanapaibul, Warren Gillibrand, Rob Burton

Abstract:

The prevalence of diabetes mellitus has been rising all over the world, and self-management is required of diabetes mellitus patients. The objective of this study is to explore self-management among ethnic groups with type 2 diabetes mellitus in Thailand, an upper-middle-income country in South East Asia. The ethnic groups in Thailand are minority groups with limited education and a culture, language, costume and lifestyle different from those of the Thai majority. A qualitative exploratory design was used. In-depth interviews with semi-structured open questions were conducted with 20 participants recruited by purposive sampling. The participants were members of ethnic groups who had type 2 diabetes mellitus, received services from a regional hospital, understood Thai and were willing to participate. Content analysis was adopted for the study. The results showed that all participants controlled their diet before the appointment day and never missed their appointments. Only 3 participants exercised, while 2 stated that they occasionally forgot to take their medicine. 10 participants used herbs to reduce their sugar level, and 12 drank a lot of water after a lapse in their diet because they believed that water could dilute the sugar. The findings identified 5 themes: 'controlling diet before appointment day'; 'drinking water after a lapse in diet'; 'medication being of vital importance'; 'exercise is unimportant'; and 'taking herbs for sugar reduction'. These results can help health professionals understand the self-management of ethnic groups and design appropriate interventions for promoting health among ethnic groups with type 2 diabetes mellitus in Thailand. The findings will inform the revision of health policy and procedures for promoting health in these ethnic groups.

Keywords: self-management, diabetes, ethnic groups, Thailand

Procedia PDF Downloads 287
7439 The Relation between Subtitling and General Translation from a Didactic Perspective

Authors: Sonia Gonzalez Cruz

Abstract:

Subtitling activities allow students to acquire and develop certain translation skills, and they also have a great impact on students' motivation. Active subtitling is a relatively recent activity that has generated much interest, particularly in the field of second-language acquisition, but it is also present within both the didactics of general translation and language teaching for translators. It is interesting to analyze how far these new resources have been included in existing curricula and to observe to what extent these teaching methods are being used in the translation classroom. Although subtitling has become an independent discipline of study and is considered a type of translation in its own right, further research is needed on the different didactic varieties that this type of audiovisual translation offers. This project is therefore framed within the field of the didactics of translation, and it focuses on the relationship between the didactics of general translation and active subtitling as a didactic tool. Its main objective is to analyze the inclusion of interlinguistic active subtitling in general translation curricula at different universities. As observed so far, the analyzed curricula make no reference to the use of this didactic tool in general translation classrooms, although they do register the inclusion of other audiovisual activities such as dubbing, script translation or video watching, among others. By means of online questionnaires and interviews, the main goal is to confirm the results obtained from the curricula and to find out to what extent subtitling has actually been included in general translation classrooms.

Keywords: subtitling, general translation, didactics, translation competence

Procedia PDF Downloads 158
7438 Pattern Identification in Statistical Process Control Using Artificial Neural Networks

Authors: M. Pramila Devi, N. V. N. Indra Kiran

Abstract:

Control charts, predominantly in the form of the X-bar chart, are important tools in statistical process control (SPC). They are useful in determining whether a process is behaving as intended or whether there are unnatural causes of variation. A process is out of control if a point falls outside the control limits or if a series of points exhibits an unnatural pattern. In this paper, a study is carried out on four training algorithms for control chart pattern (CCP) recognition. The optimal structure is identified for each algorithm; the algorithms are then compared in terms of type I and type II errors and generalization, both with and without early stopping, and the best one is proposed.
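The 3-sigma control limits referred to above, and the per-point type I error they imply (an in-control point falls outside them with probability about 0.27% under normality), can be sketched as:

```python
import statistics

def xbar_limits(subgroup_means, sigma_xbar):
    """3-sigma X-bar chart limits around the grand mean; the 3-sigma choice
    fixes the per-point type I (false alarm) rate at about 0.27%."""
    centre = statistics.fmean(subgroup_means)
    return centre - 3 * sigma_xbar, centre, centre + 3 * sigma_xbar

def out_of_control(subgroup_means, sigma_xbar):
    """Indices of subgroup means falling outside the control limits."""
    lcl, _, ucl = xbar_limits(subgroup_means, sigma_xbar)
    return [i for i, m in enumerate(subgroup_means) if m < lcl or m > ucl]
```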

Keywords: control chart pattern recognition, neural network, backpropagation, generalization, early stopping

Procedia PDF Downloads 354
7437 Reliability and Validity of a Portable Inertial Sensor and Pressure Mat System for Measuring Dynamic Balance Parameters during Stepping

Authors: Emily Rowe

Abstract:

Introduction: Balance assessments can be used to help evaluate a person's risk of falls, determine causes of balance deficits and inform intervention decisions. It is widely accepted that instrumented quantitative analysis can be more reliable and specific than semi-qualitative ordinal scales or itemised scoring methods. However, the uptake of quantitative methods is hindered by expense, lack of portability, and set-up requirements. During stepping, foot placement is actively coordinated with body centre of mass (COM) kinematics during pre-initiation. On this basis, the potential to use COM velocity just prior to foot-off, together with foot placement error, as outcome measures of dynamic balance is currently being explored using complex 3D motion capture. Inertial sensors and pressure mats might be more practical technologies for measuring these parameters in clinical settings. Objective: The aim of this study was to test the criterion validity and test-retest reliability of a synchronised inertial sensor and pressure mat-based approach to measuring foot placement error and COM velocity while stepping. Methods: Trials were held with 15 healthy participants, each attending two sessions. The task was to step onto one of 4 targets (2 for each foot) multiple times in a random, unpredictable order. The stepping target was cued using an auditory prompt and electroluminescent panel illumination. Data were collected simultaneously in both sessions using a 3D motion capture system and a combined inertial sensor-pressure mat system. To assess the reliability of each system, ICC estimates and their 95% confidence intervals were calculated based on a mean-rating (k = 2), absolute-agreement, 2-way mixed-effects model. To test the criterion validity of the combined inertial sensor-pressure mat system against the motion capture system, multi-factorial two-way repeated measures ANOVAs were carried out.
Results: Foot placement error was not reliably measured between sessions by either system (ICC 95% CIs; motion capture: 0 to >0.87; pressure mat: <0.53 to >0.90). This could reflect genuine within-subject variability, given the nature of the stepping task, and calls into question the suitability of average foot placement error as an outcome measure. Additionally, the results suggest the pressure mat is not a valid measure of this parameter, since it was statistically significantly different from, and much less precise than, the motion capture system (p=0.003). The inertial sensor was found to be a moderately reliable (ICC 95% CIs >0.46 to >0.95) but not valid measure of anteroposterior and mediolateral COM velocities (AP velocity: p=0.000; ML velocity, targets 1 to 4: p=0.734, 0.001, 0.000 and 0.376). However, with further development, the validity of the COM velocity measure could be improved. Options to investigate include the effect of inertial sensor placement with respect to pelvic marker placement, and more complex data processing methods to manage inherent accelerometer and gyroscope limitations. Conclusion: The pressure mat is not a suitable alternative for measuring foot placement error. The inertial sensors have potential for measuring COM velocity; however, further development work is needed.

Keywords: dynamic balance, inertial sensors, portable, pressure mat, reliability, stepping, validity, wearables

Procedia PDF Downloads 134
7436 Progressive Type-I Interval Censoring with Binomial Removal-Estimation and Its Properties

Authors: Sonal Budhiraja, Biswabrata Pradhan

Abstract:

This work considers statistical inference based on progressive Type-I interval censored data with random removal. The scheme can be described as follows. Suppose n identical items are placed on test at time T0 = 0, with k pre-fixed inspection times T1 < T2 < . . . < Tk, where Tk is the scheduled termination time of the experiment. At inspection time Ti, Ri of the remaining surviving units Si are randomly removed from the experiment. The removal follows a binomial distribution with parameters Si and pi for i = 1, . . . , k, with pk = 1. Under this censoring scheme, the number of failures in each inspection interval and the number of randomly removed items at each pre-specified inspection time are observed. Asymptotic properties of the maximum likelihood estimators (MLEs) are established under some regularity conditions. A β-content γ-level tolerance interval (TI) is determined for the two-parameter Weibull lifetime model using the asymptotic properties of the MLEs, and the minimum sample size required to achieve the desired β-content γ-level TI is determined. The performance of the MLEs and the TI is studied via simulation.
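The removal scheme described above is straightforward to simulate. The sketch below draws the binomial removals implicitly, by removing each survivor independently with probability pi (and pk = 1 at the final inspection); the lifetimes would come from the assumed Weibull model in a full simulation study:

```python
import random

def progressive_interval_censor(lifetimes, inspection_times, p, rng):
    """Simulate progressive Type-I interval censoring with binomial removal.
    At each inspection time T_i, count the failures in (T_{i-1}, T_i], then
    remove each surviving unit independently with probability p (1 at T_k).
    Returns the observed failure and removal counts per interval."""
    alive = sorted(lifetimes)
    failures, removals = [], []
    prev = 0.0
    k = len(inspection_times)
    for i, t in enumerate(inspection_times):
        d = sum(1 for x in alive if prev < x <= t)   # failures this interval
        alive = [x for x in alive if x > t]          # survivors at T_i
        pi = 1.0 if i == k - 1 else p                # p_k = 1 terminates the test
        kept, r = [], 0
        for x in alive:
            if rng.random() < pi:
                r += 1                               # unit removed from test
            else:
                kept.append(x)                       # unit stays on test
        alive = kept
        failures.append(d)
        removals.append(r)
        prev = t
    return failures, removals
```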

Keywords: asymptotic normality, consistency, regularity conditions, simulation study, tolerance interval

Procedia PDF Downloads 229
7435 Modeling the Demand for the Healthcare Services Using Data Analysis Techniques

Authors: Elizaveta S. Prokofyeva, Svetlana V. Maltseva, Roman D. Zaitsev

Abstract:

Rapidly evolving modern data analysis technologies in healthcare play a large role in understanding the operation of the system and its characteristics. Nowadays, one of the key tasks in urban healthcare is to optimize the resource allocation. Thus, the application of data analysis in medical institutions to solve optimization problems determines the significance of this study. The purpose of this research was to establish the dependence between the indicators of the effectiveness of the medical institution and its resources. Hospital discharges by diagnosis; hospital days of in-patients and in-patient average length of stay were selected as the performance indicators and the demand of the medical facility. The hospital beds by type of care, medical technology (magnetic resonance tomography, gamma cameras, angiographic complexes and lithotripters) and physicians characterized the resource provision of medical institutions for the developed models. The data source for the research was an open database of the statistical service Eurostat. The choice of the source is due to the fact that the databases contain complete and open information necessary for research tasks in the field of public health. In addition, the statistical database has a user-friendly interface that allows you to quickly build analytical reports. The study provides information on 28 European for the period from 2007 to 2016. For all countries included in the study, with the most accurate and complete data for the period under review, predictive models were developed based on historical panel data. An attempt to improve the quality and the interpretation of the models was made by cluster analysis of the investigated set of countries. The main idea was to assess the similarity of the joint behavior of the variables throughout the time period under consideration to identify groups of similar countries and to construct the separate regression models for them. 
The original time series were therefore used as the objects of clustering, with the k-medoids algorithm. Sampled objects were used as the cluster centers, since defining a centroid for time-series data involves additional difficulties. The number of clusters was chosen using the silhouette coefficient. After the cluster analysis, the predictive power of the models improved significantly: in one of the clusters, for example, the MAPE was only 0.82%, indicating a highly reliable short-term forecast. The predicted values of the developed models have a relatively low level of error and can be used for decisions on the staffing and resource provision of hospitals. The research reveals strong dependencies between the demand for medical services and the modern-medical-equipment variable, which highlights the importance of the technological component for the successful development of a medical facility. Data analysis currently has huge potential to improve health services, and the medical institutions that are first to introduce these technologies will have a competitive advantage.
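The clustering step described above, grouping whole time series around sampled medoids and scoring forecasts with MAPE, can be sketched as follows. This is a minimal illustration, not the study's implementation; the toy data and the simple alternating k-medoids loop are assumptions for demonstration:

```python
import numpy as np

def mape(actual, predicted):
    """Mean absolute percentage error, in percent."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return 100.0 * float(np.mean(np.abs((actual - predicted) / actual)))

def k_medoids(series, k, n_iter=100, seed=0):
    """Basic k-medoids over whole time series (one series per row).
    Cluster centres are actual sampled series, so no centroid has to
    be defined for time-series objects."""
    X = np.asarray(series, dtype=float)
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)  # pairwise distances
    rng = np.random.default_rng(seed)
    medoids = rng.choice(len(X), size=k, replace=False)
    for _ in range(n_iter):
        labels = np.argmin(d[:, medoids], axis=1)          # assign to nearest medoid
        new = []
        for j in range(k):
            members = np.flatnonzero(labels == j)
            costs = d[np.ix_(members, members)].sum(axis=1)  # total within-cluster distance
            new.append(members[np.argmin(costs)])            # best-placed member becomes medoid
        new = np.array(new)
        if np.array_equal(new, medoids):
            break
        medoids = new
    return labels, medoids
```

In practice the silhouette coefficient would be computed for several values of k and the best-scoring k retained, as the abstract describes.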

Keywords: data analysis, demand modeling, healthcare, medical facilities

Procedia PDF Downloads 129
7434 Lifetime Improvement of a Clamp Structure by Using Fatigue Analysis

Authors: Pisut Boonkaew, Jatuporn Thongsri

Abstract:

In the hard disk drive manufacturing industry, removing unnecessary parts and qualifying part quality before assembly is important. A clamp was therefore designed and fabricated as a fixture for holding parts during the testing process. Improving such a part by physical trial and error takes a long time, so simulation was introduced to improve the design and reduce the time required. The problem is that the present clamp has a low life expectancy because of the critical stress that occurs in it. Simulation was therefore used to study the behavior of stress and compressive force across all candidate designs, 27 in total after excluding repeated designs. The candidates were enumerated following the full factorial design rules of the six sigma methodology, a well-structured method for improving quality by detecting and reducing process variability, so that the defect rate decreases while process capability increases. This research focuses on reducing stress and fatigue while keeping the compressive force within the acceptable range set by the company. In the simulation, ANSYS models the 3D CAD geometry under the same conditions as the experiment, and the force at each displacement from 0.01 to 0.1 mm is recorded. The ANSYS setup was verified by a mesh convergence study, and its percentage error against the experimental result was required not to exceed the acceptable range. The design improvement therefore focuses on the angle, radius, and length values that reduce stress while keeping the force acceptable, after which fatigue analysis is performed in ANSYS to confirm that the lifetime is extended.
The simulated design was also compared against the actual clamp to observe the difference in fatigue between the two designs. The improved design extends the lifetime by up to 57% compared with the actual clamp used in manufacturing. This study provides a setting precise and trustworthy enough to serve as a reference methodology for future designs. Through its combination of the six sigma method, finite element, fatigue, and linear regression analysis, the project is expected to save up to 60 million dollars annually.
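The 27 candidate designs correspond to a full factorial over three factors (angle, radius, length) at three levels each. A minimal sketch of the enumeration and of screening by the acceptable force range follows; the factor levels and the force band are illustrative assumptions, not the study's values:

```python
from itertools import product

# Hypothetical levels for the three clamp parameters (illustrative only).
ANGLES = [30, 45, 60]        # degrees
RADII = [0.5, 1.0, 1.5]      # mm
LENGTHS = [10, 12, 14]       # mm

def enumerate_designs():
    """Full factorial: every combination of three factors at three levels."""
    return list(product(ANGLES, RADII, LENGTHS))  # 3**3 = 27 designs

def pick_best(results, force_range=(40.0, 60.0)):
    """From {design: (max_stress, compressive_force)}, keep designs whose
    force lies inside the acceptable band, then pick the lowest-stress one."""
    lo, hi = force_range
    feasible = {d: s for d, (s, f) in results.items() if lo <= f <= hi}
    return min(feasible, key=feasible.get)
```

Each design would be evaluated in the FEA solver to fill in the stress/force pairs before `pick_best` is applied.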

Keywords: clamp, finite element analysis, structural, six sigma, linear regression analysis, fatigue analysis, probability

Procedia PDF Downloads 222
7433 Artificial Intelligence in the Design of a Retaining Structure

Authors: Kelvin Lo

Abstract:

Nowadays, numerical modelling in geotechnical engineering is common but sophisticated: many advanced input settings and considerable computational effort are required to optimize a design and reduce construction cost. Optimization usually requires huge numerical models, and if it is conducted manually there is a potentially dangerous risk of human error, while the time spent on input and on extracting data from output is significant. This paper presents an automation process for the numerical modelling (Plaxis 2D) of a trench excavation supported by a secant-pile retaining structure for a top-down tunnel project. Python code controls the process, and numerical modelling is conducted automatically at every 20 m chainage along the 200 m tunnel, with the maximum retained height occurring at the middle chainage. The Python code updates the geological stratum and excavation depth under groundwater flow conditions in each 20 m section. It automatically performs trial and error to determine the required pile length and the use of props to achieve the required factor of safety and target displacement. Once the bending moment of the pile exceeds its capacity, the pile size is increased; when the pile embedment reaches the default maximum length, the prop system is turned on. Results showed that the approach saves time, increases efficiency, lowers design costs, and replaces manual labor, minimizing error.
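The trial-and-error loop described above can be sketched as a small driver function. The `run_model` callable stands in for the Plaxis 2D scripting calls (build the model, run staged construction, read results); it and all the threshold values here are hypothetical stand-ins, not the real Plaxis API or the project's criteria:

```python
# Sketch of the automated design loop: lengthen the pile until the factor
# of safety (FoS) and displacement targets are met, then fall back to the
# prop system at maximum embedment. run_model(pile_len, props_on) must
# return (fos, displacement_m, moment_over_capacity).

def design_section(run_model, fos_target=1.4, disp_limit=0.025,
                   pile_len=12.0, max_len=30.0, step=2.0):
    props_on = False
    while True:
        fos, disp, over_capacity = run_model(pile_len, props_on)
        if fos >= fos_target and disp <= disp_limit and not over_capacity:
            return {"pile_len": pile_len, "props": props_on}
        if pile_len < max_len:
            pile_len += step        # try a longer pile first
        elif not props_on:
            props_on = True         # then turn on the prop system
        else:
            raise RuntimeError("no feasible design within limits")
```

In the actual workflow this loop would be repeated for each 20 m chainage with that section's stratigraphy and excavation depth.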

Keywords: automation, numerical modelling, Python, retaining structures

Procedia PDF Downloads 38
7432 Estimation of PM10 Concentration Using Ground Measurements and Landsat 8 OLI Satellite Image

Authors: Salah Abdul Hameed Saleh, Ghada Hasan

Abstract:

The aim of this work is to produce an empirical model for determining particulate matter (PM10) concentration in the atmosphere using the visible bands of a Landsat 8 OLI satellite image over Kirkuk city, Iraq. The suggested algorithm is based on the aerosol optical reflectance model; the reflectance is a function of the optical properties of the atmosphere, which can be related to its concentrations. PM10 concentrations were measured with a Particle Mass Profiler and Counter in a Single Handheld Unit (Aerocet 531) simultaneously with the Landsat 8 OLI image acquisition, and the measurement locations were recorded with a handheld global positioning system (GPS). The reflectance values for the visible bands (Coastal aerosol, Blue, Green and Red) of the Landsat 8 OLI image were correlated with the in-situ measured PM10. The feasibility of the proposed algorithms was evaluated by the correlation coefficient (R) and root-mean-square error (RMSE) against the PM10 ground measurements, and the proposed multispectral model was chosen for its highest R and lowest RMSE. The outcomes of this research showed that the visible bands of Landsat 8 OLI are capable of estimating PM10 concentration with an acceptable level of accuracy.
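The band-reflectance-to-PM10 fit described above is, at its core, a multiple linear regression scored by R and RMSE. A minimal sketch (synthetic data, not the study's coefficients or band values):

```python
import numpy as np

def fit_pm10_model(band_reflectance, pm10):
    """Least-squares fit of PM10 = a0 + a1*b1 + ... + an*bn over the
    visible-band reflectances; returns (coefficients, R, RMSE)."""
    B = np.asarray(band_reflectance, dtype=float)  # shape (samples, bands)
    y = np.asarray(pm10, dtype=float)
    X = np.column_stack([np.ones(len(y)), B])      # intercept column
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    pred = X @ coef
    rmse = float(np.sqrt(np.mean((y - pred) ** 2)))
    r = float(np.corrcoef(y, pred)[0, 1])
    return coef, r, rmse
```

Candidate band combinations would each be fitted this way, and the one with the highest R and lowest RMSE retained, mirroring the selection criterion in the abstract.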

Keywords: air pollution, PM10 concentration, Landsat 8 OLI image, reflectance, multispectral algorithms, Kirkuk area

Procedia PDF Downloads 430
7431 Synthesis and Characterization of Zinc (II) Complex and Its Catalytic Activity on C(SP3)-H Oxidation Reactions

Authors: Yalçın Kılıç, İbrahim Kani

Abstract:

The conversion of hydrocarbons to carbonyl compounds by oxidation is one of the most important reactions in the synthesis of fine chemicals. Oxidation of hydrocarbons containing aliphatic sp3 C-H groups can yield aldehydes, ketones or carboxylic acids. In this study, the OSSO-type ligand 2,2'-[1,4-butanediylbis(thio)]bis-benzoic acid (tsabutH2) and the complex [Zn(µ-tsabut)(phen)]n (where phen = 1,10-phenanthroline) were synthesized, and their structures were characterized by the single-crystal X-ray diffraction method. The catalytic efficiency of the complex was investigated in the oxidation of organic compounds containing sp3 C-H bonds, such as cyclohexane, ethylbenzene, diphenylmethane, and p-xylene.

Keywords: metal complex, OSSO-type ligand, catalysis, oxidation

Procedia PDF Downloads 82
7430 Exploring Time-Series Phosphoproteomic Datasets in the Context of Network Models

Authors: Sandeep Kaur, Jenny Vuong, Marcel Julliard, Sean O'Donoghue

Abstract:

Time-series data are useful for modelling because they enable model evaluation. However, when reconstructing models from phosphoproteomic data, non-exact methods are often used, since knowledge of the network structure (which kinases and phosphatases produce the observed phosphorylation state) is incomplete. Such reactions are therefore often hypothesised, which introduces uncertainty. Here, we propose a framework, implemented as a web-based tool (an extension to Minardo), that generates κ models from time-series phosphoproteomic datasets. The incompleteness and uncertainty in the generated model and reactions are presented to the user visually. Using a toy EGF signalling model, we also demonstrate algorithmic verification of κ models: manually formulated requirements were evaluated against the model, highlighting the nodes causing unsatisfiability (i.e. error-causing nodes). We aim to integrate these methods into our web-based tool and to show how the identified erroneous nodes can be presented visually. In summary, this research presents a framework that lets a user explore phosphoproteomic time-series data in the context of models: the observer can see which reactions in the model are highly uncertain and which nodes cause incorrect simulation outputs. Such a tool enables an end-user to determine which empirical analyses to perform to reduce uncertainty in the presented model, and thus to better understand the underlying system.

Keywords: κ-models, model verification, time-series phosphoproteomic datasets, uncertainty and error visualisation

Procedia PDF Downloads 235
7429 STAT6 Mediates Local and Systemic Fibrosis and Type II Immune Response via Macrophage Polarization during Acute and Chronic Pancreatitis in a Murine Model

Authors: Hager Elsheikh, Matthias Sendler, Juliana Glaubnitz

Abstract:

In pancreatitis, an inflammatory reaction occurs in the pancreatic secretory cells due to premature activation of proteases, leading to pancreatic self-digestion and necrotic cell death of acinar cells. Acute pancreatitis in patients is characterized by a severe immune reaction that, if left untreated, can lead to serious complications such as organ failure or septic shock. Chronic pancreatitis is a recurrence of episodes of acute pancreatitis resulting in a fibro-inflammatory immune response, in which the type 2 immune response in the pancreas is primarily driven by alternatively activated macrophages (AAMs). One of the most important signaling pathways for M2 macrophage activation is the IL-4/STAT6 pathway. Pancreatic fibrosis is induced by hyperactivation of pancreatic stellate cells through dysregulation of the inflammatory response, leading to further damage, autodigestion and possibly necrosis of pancreatic acinar cells. The aim of this research is to investigate the effect of STAT6 knockout on disease severity and on the development of fibrosis and wound healing in the presence of different macrophage populations regulated by the type 2 immune response, after inducing chronic and/or acute pancreatitis in mouse models via cerulein injection. We further investigate the influence of the JAK/STAT6 signaling pathway on the balance of fibrosis and regeneration in STAT6-deficient and wild-type mice. Characterization of resident and recruited macrophages will provide insight into the influence of the JAK/STAT6 signaling pathway on infiltrating cells and, ultimately, on tissue fibrosis and disease severity.

Keywords: acute and chronic pancreatitis, tissue regeneration, macrophage polarization, gastroenterology

Procedia PDF Downloads 47
7428 Compressive Strength Development of Normal Concrete and Self-Consolidating Concrete Incorporated with GGBS

Authors: M. Nili, S. Tavasoli, A. R. Yazdandoost

Abstract:

In this paper, an experimental investigation of the effect of Isfahan ground granulated blast-furnace slag (GGBS) on the compressive strength development of self-consolidating concrete (SCC) and normal concrete (NC) was performed. For this purpose, Portland cement type I was replaced with GGBS in various proportions. For the NC and SCC mixes, 10×10×10 cm cubic specimens were tested at 7, 28 and 91 days. The water-to-cement ratio was 0.44, the cement content was 418 kg/m³, and the superplasticizer (SP) type III used in the SCC was based on polycarboxylic acid. The results of the experiments showed that increasing the GGBS percentage in both types of concrete reduces compressive strength at early ages.

Keywords: compressive strength, GGBS, normal concrete, self-consolidating concrete

Procedia PDF Downloads 414
7427 Heat Transfer Analysis of Corrugated Plate Heat Exchanger

Authors: Ketankumar Gandabhai Patel, Jalpit Balvantkumar Prajapati

Abstract:

Plate-type heat exchangers have many thin plates, slightly separated, with very large surface areas and fluid flow passages that are good for heat transfer. They can be more effective than shell-and-tube heat exchangers thanks to advances in brazing and gasket technology that have made the plate exchanger more practical. Plate-type heat exchangers are most widely used in the food processing and dairy industries. Fouling occurs readily in plate-type heat exchangers because deposits create an insulating layer over the heat transfer surface, which decreases heat transfer between the fluids and increases the pressure drop. The pressure drop rises as the flow area narrows, which increases the gap velocity. The thermal performance of the heat exchanger therefore decreases with time, effectively undersizing the exchanger and reducing process efficiency. Heat exchangers are often oversized by 70 to 80%, of which 30 to 50% is attributed to fouling. Fouling can be reduced by varying certain geometric and flow parameters. Based on this study, a correlation will be estimated for the Nusselt number as a function of Reynolds number, Prandtl number and chevron angle.
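Correlations of the kind mentioned above are commonly of the power-law form Nu = C · Re^m · Pr^n (with the chevron angle entering as a further factor) and can be fitted by linear least squares in log space. A minimal sketch with synthetic data; the exponents below are illustrative, not the study's fitted values:

```python
import numpy as np

def fit_nusselt(re, pr, nu):
    """Fit Nu = C * Re**m * Pr**n by linear least squares in log space.
    A chevron-angle term would enter the same way, as another column."""
    A = np.column_stack([np.ones(len(nu)), np.log(re), np.log(pr)])
    (log_c, m, n), *_ = np.linalg.lstsq(A, np.log(nu), rcond=None)
    return float(np.exp(log_c)), float(m), float(n)
```

Once fitted against experimental data, the correlation gives the heat transfer coefficient via h = Nu · k / d_h for the plate channel.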

Keywords: heat transfer coefficient, single phase flow, mass flow rate, pressure drop

Procedia PDF Downloads 297
7426 Implementation of a Photo-Curable 3D Additive Manufacturing Technology with Grey Capability by Using Piezo Ink-jets

Authors: Ming-Jong Tsai, Y. L. Cheng, Y. L. Kuo, S. Y. Hsiao, J. W. Chen, P. H. Liu, D. H. Chen

Abstract:

3D printing combines digital technology, materials science, intelligent manufacturing and the control of opto-mechatronic systems; The Economist has called it the third industrial revolution. A color 3D printing machine can support high value-added industrial and commercial design, architectural design, personal boutique products, and 3D artists' creations. The main goal of this paper is to develop photo-curable color 3D manufacturing technology and its system implementation. The key technologies include (1) development of photo-curable color 3D additive manufacturing processes and materials research, and (2) piezo-type ink-jet head control and the opto-mechatronic integration of the photo-curable color 3D laminated manufacturing system. The proposed system integrates a single piezo-type ink-jet head with two individual channels for two primary UV-curable color resins, providing a basis for future colorful 3D printing solutions. The main research results are 16 grey levels and a grey resolution of 75 dpi.

Keywords: 3D printing, additive manufacturing, color, photo-curable, Piezo type ink-jet, UV Resin

Procedia PDF Downloads 541
7425 Risk Analysis of Flood Physical Vulnerability in Residential Areas of Mathare Nairobi, Kenya

Authors: James Kinyua Gitonga, Toshio Fujimi

Abstract:

Vulnerability assessment and analysis are essential to quantifying the degree of damage and loss caused by natural disasters. Urban flooding causes major economic losses and casualties in the Mathare residential area of Nairobi, Kenya. High population driven by rural-urban migration, unemployment, and unplanned urban development are among the factors that increase flood vulnerability in the Mathare area. This study analyses the physical vulnerability to flood risk in Mathare based on scientific data: rainfall data, Mathare River discharge data, water runoff data, a field survey, and a questionnaire survey of a sample of the study area were used to develop the risk curves. Three structural types of building were identified in the study area, and vulnerability and risk curves were constructed for each by plotting the relationship between flood depth and damage. The results indicate that buildings with mud walls and mud floors are the most vulnerable to flooding, while buildings with stone walls and concrete floors are the least vulnerable. The vulnerability of building contents is mainly determined by the number of floors: households with two floors are least vulnerable, and households with one floor are most vulnerable. More than 80% of the residential buildings, including the property inside them, are highly vulnerable to floods and consequently exposed to high risk. In estimating potential casualties and injuries, the structural type of house was a major determinant: the mud/adobe structural type had casualties among 83.7% of the people living in those houses, while the masonry structural type had casualties among 10.71%. This research concludes that flood awareness, warnings and observance of the building codes can reduce damage to the buildings and their contents, and reduce deaths.
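The vulnerability curves described above map flood depth to a damage ratio per structural type and are typically evaluated by interpolation. A minimal sketch; the depth/damage pairs below are illustrative placeholders, not the study's fitted curves:

```python
import numpy as np

# Illustrative depth-damage points (depth in metres, damage ratio 0..1)
# per structural type; placeholder values only.
CURVES = {
    "mud_wall_mud_floor":  ([0.0, 0.5, 1.0, 1.5], [0.0, 0.4, 0.8, 1.0]),
    "stone_wall_concrete": ([0.0, 0.5, 1.0, 1.5], [0.0, 0.1, 0.3, 0.5]),
}

def damage_ratio(structure, depth_m):
    """Linearly interpolate the damage ratio for a given flood depth."""
    depths, damages = CURVES[structure]
    return float(np.interp(depth_m, depths, damages))
```

Multiplying the interpolated ratio by a building's replacement value gives the expected loss at that depth, which is how such curves feed into risk estimates.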

Keywords: flood loss, Mathare Nairobi, risk curve analysis, vulnerability

Procedia PDF Downloads 221
7424 Chaotic Electronic System with Lambda Diode

Authors: George Mahalu

Abstract:

The Chua diode has been configured in various ways over time, using electronic structures such as operational amplifiers (OAs) or gas or semiconductor devices. Among semiconductor devices, tunnel diodes (Esaki diodes) are most often considered, and more recently transistorized configurations such as lambda diodes. The work proposed here models a lambda-diode-type configuration consisting of two junction field-effect transistors (JFETs). The original scheme is created in the MULTISIM electronic simulation environment and analyzed to identify the conditions for the appearance of the evolutionary unpredictability specific to nonlinear dynamic systems with chaotic behavior. The chaotic deterministic oscillator is autonomous, which places it in the class of Chua-type oscillators; the only significant difference is the presence of the nonlinear device mentioned above. Chaotic behavior is identified both through strange-attractor trajectories visible during simulation and by highlighting the hypersensitivity of the system to small variations of one of the input parameters. The results obtained through simulation, and the conclusions drawn, are useful for further research into implementing such electronic solutions in theoretical and practical applications: modern small-signal amplification structures, systems for encoding and decoding messages over modern communication channels, new structures for modern neural networks, and physical implementations aimed at practically usable solutions in quantum computing and quantum computers.

Keywords: chaos, lambda diode, strange attractor, nonlinear system

Procedia PDF Downloads 64
7423 The Descriptions of vBloggers with Type 1 Diabetes about Overcoming Diabetes Burnout

Authors: Samereh Abdoli, Amit Vora, Anusha Vora

Abstract:

Background: Diabetes burnout is one of the most common contributors to decreased quality of life, poor psychosocial well-being, and increased morbidity, mortality and diabetes cost. While the term diabetes burnout is widely accepted, particularly in type 1 diabetes (T1D), the state of the science lacks a systematic approach to overcoming diabetes burnout. Objective: The study aimed to explore strategies for overcoming burnout by integrating the voices of individuals with T1D. Methods: We applied a descriptive qualitative design using YouTube videos produced by individuals with T1D. Seven YouTube videos (Austria = 1, U.S. = 6) with the highest view counts that met the inclusion criteria were analyzed using a qualitative content analysis approach. Results: Participants described overcoming diabetes burnout as a 'difficult hole to climb out of' that ultimately left them empowered. Themes describing their strategies include 'make a plan and take action', 'start with small steps', 'ask for help', 'get engaged in the diabetes community' and 'do not be perfect'. Future Work: These findings can begin the examination of different strategies to overcome diabetes burnout, which may change the course of diabetes care and management and improve the quality of diabetes care and quality of life.

Keywords: diabetes burnout, type 1 diabetes, qualitative research, YouTube videos

Procedia PDF Downloads 135
7422 Solving Stochastic Eigenvalue Problem of Wick Type

Authors: Hassan Manouzi, Taous-Meriem Laleg-Kirati

Abstract:

In this paper we study the eigenvalue problem for a stochastic elliptic partial differential equation of Wick type. Using the Wick product and the Wiener-Ito chaos expansion, the stochastic eigenvalue problem is reformulated, via the Fredholm alternative, as a system consisting of an eigenvalue problem for a deterministic partial differential equation together with a family of elliptic partial differential equations. To reduce the computational complexity of this system, we use a decomposition-coordination method. Once this approximation is performed, the statistics of the numerical solution can be easily evaluated.
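As a generic illustration of the class of problems considered (not necessarily the exact formulation of this paper), a Wick-type elliptic eigenvalue problem on a domain $D$ can be written as

```latex
-\nabla \cdot \left( \kappa \lozenge \nabla u \right) = \lambda \lozenge u
\quad \text{in } D, \qquad u = 0 \quad \text{on } \partial D,
```

where $\lozenge$ denotes the Wick product and the random coefficient $\kappa$ and solution $u$ are expanded in the Wiener-Ito chaos basis. Projecting onto that basis turns the single stochastic equation into a coupled system of deterministic equations for the chaos coefficients, which is the system whose complexity the decomposition-coordination method is meant to reduce.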

Keywords: eigenvalue problem, Wick product, SPDEs, finite element, Wiener-Ito chaos expansion

Procedia PDF Downloads 345
7421 The Phylogenetic Investigation of Candidate Genes Related to Type II Diabetes in Man and Other Species

Authors: Srijoni Banerjee

Abstract:

Sequences of some of the candidate genes (e.g., CPE, CDKAL1, GCKR, HSD11B1, IGF2BP2, IRS1, LPIN1, PKLR, TNF, PPARG) implicated in complex diseases such as type II diabetes in man were compared with those of other species to investigate phylogenetic affinity. Based on the mRNA sequences of these genes from seven to eight species, distance matrices were obtained using the bioinformatics tools MEGA 5, BioEdit, and ClustalW. Phylogenetic trees were built by the NJ and UPGMA clustering methods. Among the species compared (Xenopus l., Danio r., Macaca m., Homo sapiens s., Rattus n., Mus m., Gallus g., and Bos taurus), both NJ and UPGMA clustering show close affinity between Homo sapiens s. (man) and Rattus n. (rat) and Mus m. (mouse) for the candidate genes, except in the case of the LPIN1 gene. The results support the functional similarity of these genes in the physiological and biochemical processes shared by man and mouse/rat. Therefore, for understanding the complex etiology and treatment of this complex disease, the mouse/rat model is the best laboratory choice for experimentation.
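The UPGMA clustering used above builds a tree from a distance matrix by repeatedly merging the two closest clusters and size-weighted averaging of their distances. A minimal sketch with a toy distance matrix; the distances are illustrative, not the study's values:

```python
def upgma(dist, taxa):
    """Minimal UPGMA: repeatedly merge the two closest clusters, with
    inter-cluster distances averaged weighted by cluster size. `dist`
    maps frozenset({a, b}) -> distance; returns a nested-tuple tree."""
    sizes = {t: 1 for t in taxa}
    d = dict(dist)
    while len(sizes) > 1:
        pair = min(d, key=d.get)          # closest pair of clusters
        a, b = tuple(pair)
        merged = (a, b)                   # new cluster label (subtree)
        na, nb = sizes.pop(a), sizes.pop(b)
        del d[pair]
        for c in list(sizes):             # size-weighted average distances
            dc = (d.pop(frozenset([a, c])) * na +
                  d.pop(frozenset([b, c])) * nb) / (na + nb)
            d[frozenset([merged, c])] = dc
        sizes[merged] = na + nb
    return next(iter(sizes))
```

With distances reflecting the abstract's result (rat and mouse closest to each other, then to man, with the other species more distant), the tree nests rat/mouse innermost and joins the outgroup last.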

Keywords: phylogeny, candidate gene of type-2 diabetes, CPE, CDKAL1, GCKR, HSD11B1, IGF2BP2, IRS1, LPIN1, PKLR, TNF, PPARG

Procedia PDF Downloads 301