Search results for: optimal bias techniques
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 9947

9257 Extracellular Phytase from Lactobacillus fermentum spp KA1: Optimization of Enzyme Production and Its Application for Improving the Nutritional Quality of Rice Bran

Authors: Neha Sharma, Kanthi K. Kondepudi, Naveen Gupta

Abstract:

Phytases are phytate-specific phosphatases catalyzing the step-wise dephosphorylation of phytate, which acts as an anti-nutritional factor in food due to its strong capacity to bind minerals. In recent years, microbial phytases have been explored for improving the nutritional quality of food, but a major limitation is the acceptability of the source microorganisms. Therefore, efforts are being made to isolate organisms that are generally regarded as safe for human consumption, such as Lactic Acid Bacteria (LAB). Phytases from these organisms will have an edge over other phytase sources due to their probiotic attributes. Only a few LAB have been reported to show phytase activity, and that activity is generally intracellular. An LAB strain producing extracellular phytase would be more useful, as it can degrade phytate more effectively; moreover, the enzyme from such an isolate would also find application in food processing. Only a few species of Lactobacillus producing extracellular phytase have been reported so far. This study reports the isolation of a probiotic strain of Lactobacillus fermentum spp KA1 which produces extracellular phytase. Conditions for phytase production were optimized, resulting in an approximately 13-fold increase in yield. The phytate degradation potential of the extracellular phytase in rice bran was explored, and conditions for optimal degradation were established. Under optimal conditions, there was a 43.26% release of inorganic phosphate and a 6.45% decrease in phytate content.

Keywords: Lactobacillus, phytase, phytate reduction, rice bran

Procedia PDF Downloads 186
9256 Cost Reduction Techniques for Provision of Shelter to Homeless

Authors: Mukul Anand

Abstract:

Quality-oriented, affordable shelter for all has always been the key issue in the housing sector of our country. Homelessness is the acute form of housing need. It is a paradox that, in spite of innumerable government-initiated programmes for affordable housing, a certain section of society is still devoid of shelter. About nineteen million (18.78 million) households grappled with housing shortage in urban India in 2012. In the Indian scenario, there is a major mismatch between the people for whom the houses are being built and those who need them. The prime obstacle faced by public authorities in facilitating quality housing for all is the high cost of construction. The present paper examines executable techniques for reducing the cost of housing the homeless. The key factors responsible for the delivery of affordable housing stock, such as capacity building, resource optimization, innovative low-cost building materials, and indigenous skeleton housing systems, will also be incorporated in developing these techniques. Time performance, an important aspect of the above factors, will also be explored so as to increase the effectiveness of low-cost housing. Along with this, best practices will be taken up as case studies, citing both conventional housing techniques and innovative low-cost techniques. Transportation accounts for approximately 30% of the total construction budget; thus, the use of alternative local solutions, depending upon the region, will be covered so as to highlight the major components of low-cost housing. The government lacks baseline information on the use of innovative low-cost methods and resource optimization techniques. Therefore, the paper attempts to bring to light simpler solutions for achieving low-cost housing.

Keywords: construction, cost, housing, optimization, shelter

Procedia PDF Downloads 440
9255 Designing Ecologically and Economically Optimal Electric Vehicle Charging Stations

Authors: Y. Ghiassi-Farrokhfal

Abstract:

The number of electric vehicles (EVs) is increasing worldwide. Replacing gas-fueled cars with EVs reduces carbon emissions. However, the extensive energy consumption of EVs stresses the energy systems, requiring non-green sources of energy (such as gas turbines) to compensate for the new energy demand that EVs introduce. To make EVs an even greener solution for future energy systems, new EV charging stations are equipped with solar PV panels and batteries. This helps serve the energy demand of EVs with the green energy of solar panels. To ensure energy availability, solar panels are combined with batteries: the energy surplus at any point is stored in the batteries and is used when there is not enough solar energy to serve the demand. While EV charging stations equipped with solar panels and batteries are green and ecologically optimal, they might not be financially viable solutions due to battery prices. To make the system viable, the battery should be sized economically and the system operated optimally. This is, in general, a challenging problem because of the stochastic nature of the EV arrivals at the charging station, the available solar energy, and the battery operating regime. In this work, we provide a mathematical model for this problem and compute the return on investment (ROI) of such a system, which is designed to be ecologically and financially optimal. We also quantify the minimum required investment in terms of battery and solar panels, along with the operating strategy, to ensure that a charging station has enough energy to serve its EV demand at any time.
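
As a rough illustration of the kind of ROI calculation involved, the minimal sketch below simulates one year of hourly operation of a PV-plus-battery charging station under a simple greedy dispatch rule. All profiles, prices, and capacities are hypothetical placeholders chosen for demonstration only, not values or methods from the paper.

```python
# Toy greedy-dispatch simulation of a PV + battery charging station (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
hours = 24 * 365
pv = np.clip(np.sin(np.linspace(0, 2 * np.pi * 365, hours)), 0, None) * 50  # kW, toy solar profile
ev_demand = rng.poisson(3, hours) * 7.0                                      # kW, toy EV charging demand

battery_kwh, batt_cost_per_kwh = 200.0, 300.0                 # assumed battery size and price
pv_cost, retail_price, grid_price = 40_000.0, 0.30, 0.15      # assumed $/system, $/kWh sold, $/kWh bought

soc, revenue, grid_cost = 0.0, 0.0, 0.0
for g, d in zip(pv, ev_demand):
    direct = min(g, d)                  # PV serves EV demand first
    surplus, deficit = g - direct, d - direct
    charge = min(surplus, battery_kwh - soc)
    soc += charge                       # store surplus in the battery
    discharge = min(deficit, soc)
    soc -= discharge
    from_grid = deficit - discharge     # remainder bought from the grid
    revenue += d * retail_price
    grid_cost += from_grid * grid_price

capex = battery_kwh * batt_cost_per_kwh + pv_cost
annual_profit = revenue - grid_cost
print(f"annual profit ≈ ${annual_profit:,.0f}")
print(f"simple ROI ≈ {annual_profit / capex:.1%} per year (payback ≈ {capex / annual_profit:.1f} years)")
```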

Keywords: solar energy, battery storage, electric vehicle, charging stations

Procedia PDF Downloads 213
9254 Differentiation of the Functional in an Optimization Problem for Coefficients of Elliptic Equations with Unbounded Nonlinearity

Authors: Aigul Manapova

Abstract:

We consider an optimal control problem in the leading coefficient of nonlinear equations with a divergence-form elliptic operator, unbounded nonlinearity, and a Dirichlet boundary condition. The conditions imposed on the coefficients of the state equation are assumed to hold only in a small neighborhood of the exact solution to the original problem. This assumption allows the state equation to involve nonlinearities of unrestricted growth and considerably expands the class of admissible functions serving as solutions of the state equation. We obtain formulas for the first partial derivatives of the objective functional with respect to the control functions. To calculate the gradients, the numerical solutions of the state and adjoint problems are used. We also prove that the gradient of the cost functional is Lipschitz continuous.
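
For orientation, the adjoint-based gradient has the following familiar structure in a simplified linear model problem (control in the leading coefficient of a divergence-form operator with a tracking-type cost). This is an illustrative sketch only and does not reproduce the paper's unbounded-nonlinearity setting or its exact formulas.

```latex
% Illustrative sketch for a simplified linear model problem, not the paper's
% unbounded-nonlinearity setting: control k(x) in the leading coefficient of a
% divergence-form operator, with a tracking-type cost functional.
\[
-\nabla\cdot\bigl(k(x)\,\nabla y\bigr) = f \ \text{ in } \Omega, \qquad
y = 0 \ \text{ on } \partial\Omega, \qquad
J(k) = \int_\Omega \bigl(y(x) - y_d(x)\bigr)^2 \, dx .
\]
\[
\text{Adjoint problem:}\quad
-\nabla\cdot\bigl(k(x)\,\nabla p\bigr) = 2\bigl(y - y_d\bigr) \ \text{ in } \Omega, \qquad
p = 0 \ \text{ on } \partial\Omega ,
\]
\[
\text{so that}\quad
J'(k)[\delta k] = -\int_\Omega \delta k(x)\,\nabla y(x)\cdot\nabla p(x)\, dx .
\]
```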

Keywords: cost functional, differentiability, divergent elliptic operator, optimal control, unbounded nonlinearity

Procedia PDF Downloads 166
9253 Analysis of Expression Data Using Unsupervised Techniques

Authors: M. A. I Perera, C. R. Wijesinghe, A. R. Weerasinghe

Abstract:

This study was conducted to review and identify the unsupervised techniques that can be employed to analyze gene expression data in order to identify better subtypes of tumors. Identifying subtypes of cancer helps in improving the efficacy and reducing the toxicity of treatments by providing clues for finding targeted therapeutics. The process of gene expression data analysis is described in three steps: preprocessing, clustering, and cluster validation. Feature selection is important since genomic data are high dimensional, with a large number of features compared to samples. Hierarchical clustering and K-Means are often used in the analysis of gene expression data. There are several cluster validation techniques used in validating the clusters. Heatmaps are an effective external validation method that allows comparing the identified classes with clinical variables and visually analyzing the classes.
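
A minimal sketch of this preprocessing-clustering-validation pipeline on synthetic data is given below; the expression matrix, the number of clusters, and the choice of the 2,000 most variable genes are assumptions made for illustration, not details from the study.

```python
# Sketch: preprocessing -> clustering -> internal validation on a synthetic expression matrix.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans, AgglomerativeClustering
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(42)
X = rng.normal(size=(100, 5000))          # 100 samples x 5000 genes (placeholder expression matrix)

# Preprocessing / feature selection: keep the most variable genes, then standardize.
top = np.argsort(X.var(axis=0))[-2000:]
X_sel = StandardScaler().fit_transform(X[:, top])

# Clustering: K-Means and hierarchical clustering, as commonly used for expression data.
for name, model in [("kmeans", KMeans(n_clusters=3, n_init=10, random_state=0)),
                    ("hierarchical", AgglomerativeClustering(n_clusters=3))]:
    labels = model.fit_predict(X_sel)
    # Internal cluster validation; external validation (e.g., heatmaps against
    # clinical variables) would follow in a real analysis.
    print(name, "silhouette:", round(silhouette_score(X_sel, labels), 3))
```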

Keywords: cancer subtypes, gene expression data analysis, clustering, cluster validation

Procedia PDF Downloads 144
9252 Fake News Detection for Korean News Using Machine Learning Techniques

Authors: Tae-Uk Yun, Pullip Chung, Kee-Young Kwahk, Hyunchul Ahn

Abstract:

Fake news is defined as news articles that are intentionally and verifiably false and could mislead readers. The spread of fake news may provoke anxiety, chaos, fear, or irrational decisions among the public. Thus, detecting fake news and preventing its spread has become a very important issue in our society. However, due to the huge amount of fake news produced every day, it is almost impossible for humans to identify it all. In this context, researchers have tried over the past years to develop automated fake news detection using machine learning techniques. However, to the best of our knowledge, no prior study has proposed an automated fake news detection method for Korean news. In this study, we aim to detect Korean fake news using text mining and machine learning techniques. Our proposed method consists of two steps. In the first step, the news contents to be analyzed are converted into quantified values using various text mining techniques (topic modeling, TF-IDF, and so on). In the second step, classifiers are trained using the values produced in step 1. As the classifiers, machine learning techniques such as logistic regression, backpropagation networks, support vector machines, and deep neural networks can be applied. To validate the effectiveness of the proposed method, we collected about 200 short Korean news articles from Seoul National University's FactCheck, which provides detailed analysis reports from 20 media outlets and links to source documents for each case. Using this dataset, we will identify which text features are important as well as which classifiers are effective in detecting Korean fake news.
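
The two-step procedure can be sketched as below: text is first quantified with TF-IDF and then fed to several candidate classifiers. The toy articles, labels, and parameters are placeholders; the actual study uses the roughly 200 Korean articles from SNU FactCheck.

```python
# Sketch of the two-step approach: (1) TF-IDF features, (2) candidate classifiers.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

texts = ["economy grows faster than expected", "celebrity secretly an alien, sources say",
         "new policy announced by the ministry", "miracle cure hidden by doctors"]
labels = [0, 1, 0, 1]                      # 0 = real, 1 = fake (illustrative toy data)

# Step 1: convert news contents into quantified values (here, TF-IDF vectors).
X = TfidfVectorizer(ngram_range=(1, 2)).fit_transform(texts)

# Step 2: train the candidate classifiers mentioned in the abstract.
for clf in [LogisticRegression(max_iter=1000), SVC(kernel="linear"),
            MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000)]:
    scores = cross_val_score(clf, X, labels, cv=2)
    print(type(clf).__name__, scores.mean())
```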

Keywords: fake news detection, Korean news, machine learning, text mining

Procedia PDF Downloads 269
9251 Planning a Haemodialysis Process by Minimum Time Control of Hybrid Systems with Sliding Motion

Authors: Radoslaw Pytlak, Damian Suski

Abstract:

The aim of the paper is to provide a computational tool for planning a haemodialysis process. It is shown that optimization methods can be used to obtain the most effective treatment focused on removing both urea and phosphorus during the process. In order to achieve that, the IV–compartment model of phosphorus kinetics is applied. This kinetics model takes into account a rebound phenomenon that can occur during haemodialysis and results in a hybrid model of the process. Furthermore, vector fields associated with the model equations are such that it is very likely that using the most intuitive objective functions in the planning problem could lead to solutions which include sliding motions. Therefore, building computational tools for solving the problem of planning a haemodialysis process has required constructing numerical algorithms for solving optimal control problems with hybrid systems. The paper concentrates on minimum time control of hybrid systems since this control objective is the most suitable for the haemodialysis process considered in the paper. The presented approach to optimal control problems with hybrid systems is different from the others in several aspects. First of all, it is assumed that a hybrid system can exhibit sliding modes. Secondly, the system’s motion on the switching surface is described by index 2 differential–algebraic equations, and that guarantees accurate tracking of the sliding motion surface. Thirdly, the gradients of the problem’s functionals are evaluated with the help of adjoint equations. The adjoint equations presented in the paper take into account sliding motion and exhibit jump conditions at transition times. The optimality conditions in the form of the weak maximum principle for optimal control problems with hybrid systems exhibiting sliding modes and with piecewise constant controls are stated. The presented sensitivity analysis can be used to construct globally convergent algorithms for solving considered problems. The paper presents numerical results of solving the haemodialysis planning problem.

Keywords: haemodialysis planning process, hybrid systems, optimal control, sliding motion

Procedia PDF Downloads 191
9250 A Comparative Analysis of Various Companding Techniques Used to Reduce PAPR in VLC Systems

Authors: Arushi Singh, Anjana Jain, Prakash Vyavahare

Abstract:

Recently, Li-Fi (light fidelity) has been launched based on the VLC (visible light communication) technique and is up to 100 times faster than Wi-Fi. The 5G mobile communication system is now proposed to use VLC-OFDM as the transmission technique. The VLC system, which operates on visible light, is considered for efficient spectrum use and easy intensity modulation through LEDs. The reason for the high speed of VLC is the LED, which can flicker incredibly fast (on the order of MHz). Another advantage of employing LEDs is that they act as low-pass filters, resulting in no out-of-band emission. The VLC system falls under the category of 'green technology' for utilizing LEDs. In the present scenario, OFDM is used for high data rates, interference immunity, and high spectral efficiency. In spite of these advantages, OFDM suffers from large PAPR, ICI among carriers, and frequency offset errors. Since the data transmission technique used in the VLC system is OFDM, the system suffers from the drawbacks of both OFDM and VLC: the non-linearity due to the non-linear characteristics of the LED, and the PAPR of OFDM, due to which the high power amplifier enters the non-linear region. The proposed paper focuses on the reduction of PAPR in VLC-OFDM systems. Many techniques have been applied to reduce PAPR, such as clipping, which introduces distortion in the carrier; selective mapping, which wastes bandwidth; and partial transmit sequence, which is very complex due to the exponentially increasing number of sub-blocks. The paper discusses three companding techniques, namely µ-law, A-law, and an advanced A-law companding technique. The analysis shows that the advanced A-law companding technique reduces the PAPR of the signal by adjusting the companding parameter within its range. VLC-OFDM systems are the future of wireless communication, but non-linearity in VLC-OFDM is a severe issue. The proposed paper discusses techniques to reduce PAPR, one of the non-linearities of the system. The companding techniques mentioned in this paper provide better results without increasing the complexity of the system.
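
As an illustration of the companding idea, the sketch below applies µ-law companding to the magnitude of a QPSK-modulated OFDM symbol and compares the PAPR before and after. The subcarrier count, µ value, and use of a complex baseband symbol are assumptions for demonstration rather than the paper's exact VLC-OFDM setup or its advanced A-law scheme.

```python
# Illustrative sketch: µ-law companding of an OFDM symbol and its effect on PAPR.
import numpy as np

def papr_db(x):
    """Peak-to-average power ratio in dB."""
    power = np.abs(x) ** 2
    return 10 * np.log10(power.max() / power.mean())

def mu_law_compand(x, mu=255.0):
    """Compand the signal magnitude with the µ-law curve, preserving the phase."""
    peak = np.max(np.abs(x))
    mag = peak * np.log1p(mu * np.abs(x) / peak) / np.log1p(mu)
    return mag * np.exp(1j * np.angle(x))

rng = np.random.default_rng(1)
N = 256                                                   # number of subcarriers (assumed)
qpsk = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], N)
ofdm_symbol = np.fft.ifft(qpsk) * np.sqrt(N)              # time-domain OFDM symbol

print("PAPR before companding: %.2f dB" % papr_db(ofdm_symbol))
print("PAPR after µ-law companding: %.2f dB" % papr_db(mu_law_compand(ofdm_symbol)))
```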

Keywords: non-linear companding techniques, peak to average power ratio (PAPR), visible light communication (VLC), VLC-OFDM

Procedia PDF Downloads 279
9249 Synthesis of MIPs towards Precursors and Intermediates of Illicit Drugs and Their following Application in Sensing Unit

Authors: K. Graniczkowska, N. Beloglazova, S. De Saeger

Abstract:

The threat of synthetic drugs is one of the most significant current drug problems worldwide. The use of drugs of abuse has increased dramatically during the past three decades. Among others, amphetamine-type stimulants (ATS) are globally the second most widely used drugs after cannabis, exceeding the use of cocaine and heroin. ATS are potent central nervous system (CNS) stimulants, capable of inducing euphoric states similar to cocaine. Recreational use of ATS is widespread, even though warnings of irreversible damage to the CNS have been reported. ATS pose a big problem, and their production contributes to environmental pollution by discharging large volumes of liquid waste into the sewage system. Therefore, there is a demand to develop robust and sensitive sensors that can detect ATS and their intermediates in environmental water samples. A rapid and simple test is required. Antibody-based tests cannot be applied to the analysis of environmental water samples, which can be a harsh matrix. Therefore, molecularly imprinted polymers (MIPs), known as synthetic antibodies, have been chosen for this approach. MIPs are characterized by high mechanical and thermal stability and show chemical resistance in a broad pH range and in various organic or aqueous solvents. These properties make them the preferred type of receptor for application in the harsh conditions imposed by environmental samples. To the best of our knowledge, there are no existing MIP-based sensors for amphetamine and its intermediates, and not many commercial MIPs for this application are available. Therefore, the aim of this study was to compare different techniques for obtaining MIPs with high specificity towards ATS and to characterize them for subsequent use in a sensing unit. MIPs against amphetamine and its intermediates were synthesized using a few different techniques, such as electro-, thermo-, and UV-initiated polymerization. Different monomers, cross-linkers, and initiators, in various ratios, were tested to obtain the best sensitivity and polymer properties. Subsequently, specificity and selectivity were compared with commercially available MIPs against amphetamine. Different linkers, such as lipoic acid, 3-mercaptopropionic acid, and tyramine, were examined in combination with several immobilization techniques to select the best procedure for attaching the particles to the sensor surface. The performed experiments allowed choosing an optimal method for the intended sensor application. The stability of the MIPs in extreme conditions, such as highly acidic or basic media, was determined. The obtained results support the applicability of a MIP-based sensor for sewage system testing.

Keywords: amphetamine type stimulants, environment, molecular imprinted polymers, MIPs, sensor

Procedia PDF Downloads 246
9248 Highly Responsive p-NiO/n-rGO Heterojunction Based Self-Powered UV Photodetectors

Authors: P. Joshna, Souvik Kundu

Abstract:

Detection of ultraviolet (UV) radiation is very important, as UV has a profound influence on humankind and other living beings, as well as on applications such as military equipment. In this work, a self-powered UV photodetector based on oxide heterojunctions is reported. Thin films of p-type nickel oxide (NiO) and n-type reduced graphene oxide (rGO) were used to form the p-n heterojunction. Low-cost, low-temperature chemical synthesis was utilized to prepare the oxides, and the spin coating technique was employed to deposit them onto indium tin oxide (ITO) coated glass substrates. The top platinum electrode was deposited using a physical vapor evaporation technique. NiO offers strong UV absorption with high hole mobility, and rGO reduces the recombination rate by separating electrons out of the photogenerated carriers. Several structural characterization techniques, such as X-ray diffraction, atomic force microscopy, and scanning electron microscopy, were used to study the materials' crystallinity, microstructure, and surface roughness. On one hand, the oxides were found to be polycrystalline in nature, with no secondary phases present; on the other hand, the surface roughness was found to be low, with no pit holes, which indicates the formation of high-quality oxide thin films. X-ray photoelectron spectroscopy was employed to study the chemical compositions and oxidation states. Electrical characterizations, such as current-voltage and current response measurements, were also performed on the device to determine the responsivity, detectivity, and external quantum efficiency under dark and UV illumination. This p-n heterojunction device offered a fast photoresponse and a high on-off ratio under 365 nm UV light illumination at zero bias. The device based on the proposed architecture shows the efficacy of the oxide heterojunction for efficient UV photodetection under zero bias, which opens up a new path towards the development of self-powered photodetectors for the environment and health monitoring sectors.

Keywords: chemical synthesis, oxides, photodetectors, spin coating

Procedia PDF Downloads 120
9247 Examining Gender Bias in the Sport Concussion Assessment Tool 3 (SCAT3): A Differential Item Functioning Analysis in NCAA Sports

Authors: Rachel M. Edelstein, John D. Van Horn, Karen M. Schmidt, Sydney N. Cushing

Abstract:

As a consequence of sports-related concussions, female athletes have been documented as reporting more symptoms than their male counterparts, in addition to incurring longer periods of recovery. However, the role of sex and its potential influence on symptom reporting and recovery outcomes in concussion management has not been completely explored. The present study aims to investigate the relationship between female concussion symptom severity and the presence of assessment bias. The Sport Concussion Assessment Tool 3 (SCAT3), collected by the NCAA and DoD CARE Consortium, was quantified at five different time points post-concussion. N = 1,258 NCAA athletes participated: n = 473 female athletes (soccer, rugby, lacrosse, ice hockey) and n = 785 male athletes (football, rugby, lacrosse, ice hockey). A polytomous Item Response Theory (IRT) Graded Response Model (GRM) was used to assess the relationship between sex and symptom reporting. Differential Item Functioning (DIF) and Differential Group Functioning (DGF) were used to examine potential group-level bias. Interactions for DIF were utilized to explore the impact of sex on symptom reporting among NCAA male and female athletes throughout and after their concussion recovery. DIF was detected in only a limited number of items after B-H corrections; however, one symptom, "Pressure in Head" (-0.29, p=0.04 vs. -0.20, p=0.04), was statistically significant at both < 6 hours and 24-48 hours. This implies that at < 6 hours, males were 29% less likely to indicate "Pressure in the Head" compared to female athletes, and 20% less likely at 24-48 hours. Overall, the DGF suggested significant group differences, suggesting that male athletes might be at a higher risk of returning to play prematurely (logits = -0.38, p < 0.001). However, after analyzing the SCAT3, a clinically relevant trend was discovered: twelve of the twenty-two symptoms suggest higher difficulty in female athletes at three or more of the five time points. These symptoms include Balance Problems, Blurry Vision, Confusion, Dizziness, Don't Feel Right, Feel in Fog, Feel Slowed Down, Low Energy, Neck Pain, Sensitivity to Light, Sensitivity to Noise, and Trouble Falling Asleep. Despite the lack of statistical significance, this tendency is contrary to current literature stating that males may be unclear on symptoms while females may be more honest in reporting symptoms. Further research, including possible modifying socioecological factors, is needed to determine whether females consistently experience more symptoms and require longer recovery times or whether, more parsimoniously, males tend to present their symptoms and readiness for play differently than females. Such research will help to improve the validity of current assumptions concerning male as compared to female head injuries and optimize individualized treatments for sports-related head injuries.

Keywords: female athlete, sports-related concussion, item response theory, concussion assessment

Procedia PDF Downloads 69
9246 An Adaptive Oversampling Technique for Imbalanced Datasets

Authors: Shaukat Ali Shahee, Usha Ananthakumar

Abstract:

A data set exhibits the class imbalance problem when one class has very few examples compared to the other class; this is also referred to as between-class imbalance. Traditional classifiers fail to classify the minority class examples correctly due to their bias towards the majority class. Apart from between-class imbalance, imbalance within classes, where classes are composed of a different number of sub-clusters and these sub-clusters contain different numbers of examples, also deteriorates the performance of the classifier. Previously, many methods have been proposed for handling the imbalanced dataset problem. These methods can be classified into four categories: data preprocessing, algorithmic methods, cost-based methods, and ensembles of classifiers. Data preprocessing techniques have shown great potential as they attempt to improve the data distribution rather than the classifier. A data preprocessing technique handles class imbalance either by increasing the minority class examples or by decreasing the majority class examples. Decreasing the majority class examples leads to loss of information, and when the minority class has an absolute rarity, removing majority class examples is generally not recommended. Existing methods for handling class imbalance do not address both between-class imbalance and within-class imbalance simultaneously. In this paper, we propose a method that handles between-class imbalance and within-class imbalance simultaneously for binary classification problems. Removing between-class and within-class imbalance simultaneously eliminates the bias of the classifier towards bigger sub-clusters by minimizing the error domination of bigger sub-clusters in the total error. The proposed method uses model-based clustering to find the presence of sub-clusters or sub-concepts in the dataset. The number of examples oversampled among the sub-clusters is determined based on the complexity of the sub-clusters. The method also takes into consideration the scatter of the data in the feature space and adaptively copes with unseen test data using the Lowner-John ellipsoid to increase the accuracy of the classifier. In this study, a neural network is used, as this is one classifier in which the total error is minimized, and removing the between-class and within-class imbalance simultaneously helps the classifier give equal weight to all the sub-clusters irrespective of the classes. The proposed method is validated on 9 publicly available data sets and compared with three existing oversampling techniques that rely on the spatial location of minority class examples in the Euclidean feature space. The experimental results show the proposed method to be statistically significantly superior to the other methods in terms of various accuracy measures. Thus, the proposed method can serve as a good alternative for handling various problem domains, like credit scoring, customer churn prediction, and financial distress, that typically involve imbalanced data sets.
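
A much simplified sketch of the core idea (model-based clustering of the minority class followed by per-sub-cluster oversampling) is given below. It omits the paper's complexity measure and the Lowner-John ellipsoid step, uses a Gaussian mixture and SMOTE-like interpolation as stand-ins, and is an illustration rather than the authors' algorithm.

```python
# Simplified illustration: cluster the minority class, then oversample each sub-cluster.
import numpy as np
from sklearn.mixture import GaussianMixture

def adaptive_oversample(X_min, X_maj, n_components=2, random_state=0):
    rng = np.random.default_rng(random_state)
    gmm = GaussianMixture(n_components=n_components, random_state=random_state).fit(X_min)
    labels = gmm.predict(X_min)
    target = int(np.ceil(len(X_maj) / n_components))   # equal share per sub-cluster (simplification)
    synthetic = []
    for c in range(n_components):
        cluster = X_min[labels == c]
        need = max(target - len(cluster), 0)
        if need and len(cluster) > 1:
            # interpolate between random pairs within the sub-cluster (SMOTE-like step)
            i = rng.integers(0, len(cluster), need)
            j = rng.integers(0, len(cluster), need)
            lam = rng.random((need, 1))
            synthetic.append(cluster[i] + lam * (cluster[j] - cluster[i]))
    return np.vstack([X_min] + synthetic) if synthetic else X_min

X_maj = np.random.default_rng(1).normal(0, 1, (200, 2))
X_min = np.vstack([np.random.default_rng(2).normal(3, 0.3, (15, 2)),
                   np.random.default_rng(3).normal(-3, 0.3, (5, 2))])
print(adaptive_oversample(X_min, X_maj).shape)   # minority class grown to match the majority
```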

Keywords: classification, imbalanced dataset, Lowner-John ellipsoid, model based clustering, oversampling

Procedia PDF Downloads 410
9245 Study of the Hysteretic I-V Characteristics in a Polystyrene/ZnO-Nanorods Stack Layer

Authors: You-Lin Wu, Yi-Hsing Sung, Shih-Hung Lin, Jing-Jenn Lin

Abstract:

Performance improvement in optoelectronic devices such as solar cells and photodetectors has been reported when a polymer/ZnO nanorods stack is used. Resistance switching of polymer/ZnO nanocrystal (or nanorod) hybrids has also gained considerable research interest recently. It has been reported that the high- and low-resistance states of a metal/insulator/metal (MIM) structure diode with a polystyrene (PS) and ZnO hybrid as the insulator layer can be switched by an applied bias after a high-voltage forming process, while the same device structure with merely a PS layer does not show any forming behavior. In this work, we investigated the current-voltage (I-V) characteristics of an MIM device with a PS/ZnO nanorods stack deposited on a fluorine-doped tin oxide (FTO) glass substrate. The ZnO nanorods were grown by a hydrothermal method using a mixture of zinc nitrate, hexamethylenetetramine, and DI water. Following that, a PS layer was deposited by spin coating. Finally, the device, with a structure of Ti/PS/ZnO nanorods/FTO, was completed by e-gun evaporating a Ti layer on top of the PS layer. The semiconductor parameter analyzer Agilent 4156C was then used to measure the I-V characteristics of the device by applying a linear ramp sweep voltage with a sweep sequence of 0V → 4V → 0V → 3V → 0V → 2V → 0V → 1V → 0V in both positive and negative directions. It is interesting to find that the I-V characteristics are bias dependent and hysteretic, indicating that the device with the Ti/PS/ZnO nanorods/FTO structure exhibits ferroelectricity. Our results also show that the maximum hysteresis loop height of the I-V characteristics, as well as the voltage at which the maximum hysteresis loop height of each scan occurs, increases with increasing maximum sweep voltage. It should be noted that, although ferroelectricity has been found in ZnO at its melting temperature (1975 °C) and in Li- or Co-doped ZnO, neither PS nor ZnO has ferroelectricity at room temperature. Using the same structure but with only a PS or ZnO layer as the insulator does not give any hysteretic I-V characteristics. It is believed that a charge polarization layer is induced near the PS/ZnO nanorods stack interface and thus causes the ferroelectricity in the device with the Ti/PS/ZnO nanorods/FTO structure. Our results show that the PS/ZnO stack can find a potential application in a resistive switching memory device with an MIM structure.

Keywords: ferroelectricity, hysteresis, polystyrene, resistance switching, ZnO nanorods

Procedia PDF Downloads 309
9244 Revalidation and Harmonization of Existing IFCC Standardized Hepatic, Cardiac, and Thyroid Function Tests by Precision Optimization and External Quality Assurance Programs

Authors: Junaid Mahmood Alam

Abstract:

Revalidating and harmonizing clinical chemistry analytical principles and optimizing methods through quality control programs and assessments is the preeminent means of attaining optimal outcomes within clinical laboratory services. The present study reports the revalidation of our existing IFCC-standardized analytical methods, particularly hepatic, cardiac, and thyroid function tests, by optimization of precision analyses and processing through external and internal quality assessments and regression analysis. Parametric components of hepatic (bilirubin, ALT, γGT, ALP), cardiac (LDH, AST, Trop I) and thyroid/pituitary (T3, T4, TSH, FT3, FT4) function tests were used to validate the analytical techniques on automated chemistry and immunology analyzers, namely Hitachi 912, Cobas 6000 e601, Cobas c501, and Cobas e411, with UV kinetic and colorimetric dry chemistry principles and electrochemiluminescence immunoassay (ECLi) techniques. The process of validation and revalidation was completed by evaluating and assessing the precision-analyzed Preci-control data of the various instruments, plotted against each other with regression analysis (R2). Results showed that revalidation and optimization of the respective parameters, which were accredited through CAP, CLSI, and NEQAPP assessments, depicted 99.0% to 99.8% optimization, in addition to that of the methodology and instruments used for the analyses. The regression R2 for BilT was 0.996, whereas ALT, ALP, γGT, LDH, AST, Trop I, T3, T4, TSH, FT3, and FT4 exhibited R2 values of 0.998, 0.997, 0.993, 0.967, 0.970, 0.980, 0.976, 0.996, 0.997, 0.997, and 0.990, respectively. This confirmed marked harmonization of the analytical methods and instrumentation, thus revalidating optimized precision standardization as per the IFCC-recommended guidelines. It is concluded that the practice of revalidating and harmonizing existing or new services should be followed by all clinical laboratories, especially those associated with tertiary care hospitals. This will ensure the delivery of standardized, proficiency-tested, optimized services for prompt and better patient care, guaranteeing maximum patient confidence.

Keywords: revalidation, standardized, IFCC, CAP, harmonized

Procedia PDF Downloads 265
9243 Optimal Design of Concrete Shells by Modified Particle Community Algorithm Using Spinless Curves

Authors: Reza Abbasi, Ahmad Hamidi Benam

Abstract:

Shell structures have many geometrical variables, and modifying some of these parameters can improve the mechanical behavior of the shell. On the other hand, the behavior of such structures depends on their geometry rather than on mass. Optimization techniques are useful in finding the geometrical shape of shell structures that improves mechanical behavior, especially to prevent or reduce bending anchors. The overall objective of this research is to optimize the shape of concrete shells using the thickness and height parameters along the reference curve and the overall shape of this curve. To implement the proposed scheme, the geometry of the structure was formulated using nonlinear curves. Shell optimization was performed under equivalent static loading conditions using the modified particle community algorithm. The results of this optimization show that, without disrupting the initial design and with only slight changes in the shell geometry, the structural behavior is significantly improved.

Keywords: concrete shells, shape optimization, spinless curves, modified particle community algorithm

Procedia PDF Downloads 226
9242 Global Analysis in a Growth Economic Model with Perfect-Substitution Technologies

Authors: Paolo Russu

Abstract:

The purpose of the present paper is to highlight some features of an economic growth model with negative environmental externalities, giving rise to a three-dimensional dynamic system. In particular, we show that the economy, which is based on a perfect-substitution technologies production function, exhibits neither indeterminacy nor a poverty trap. This implies that the equilibrium selected by the economy depends on its history (the initial values of the state variables) rather than on the expectations of economic agents. Moreover, we prove that the basins of attraction of locally stable equilibrium points may be very large, as they can extend up to the boundary of the system's phase space. The infinite-horizon optimal control problem has the purpose of maximizing the representative agent's instantaneous utility function, which depends on leisure and consumption.

Keywords: Hopf bifurcation, open-access natural resources, optimal control, perfect-substitution technologies, Poincaré compactification

Procedia PDF Downloads 166
9241 Random Access in IoT Using Naïve Bayes Classification

Authors: Alhusein Almahjoub, Dongyu Qiu

Abstract:

This paper deals with the random access procedure in next-generation networks and presents a solution to reduce the total service time (TST), which is one of the most important performance metrics in current and future internet of things (IoT) based networks. The proposed solution focuses on the calculation of the optimal transmission probability, which maximizes the success probability and reduces the TST. It uses the information on idle preambles in every time slot and, based on it, estimates the number of backlogged IoT devices using Naïve Bayes estimation, a type of supervised learning in the machine learning domain. The estimation of backlogged devices is necessary since the optimal transmission probability depends on it and the eNodeB does not have this information. Simulations carried out in MATLAB verify that the proposed solution gives excellent performance.
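
The core steps can be sketched as follows: a Gaussian Naïve Bayes model is trained (here on simulated slots) to map the observed count of idle preambles to an estimate of the number of backlogged devices, and the transmission probability is then set from that estimate. The preamble count, device range, and the simplifying assumption that all backlogged devices transmit in the training slots are placeholders for illustration, not the paper's exact setup.

```python
# Illustrative sketch: estimate the backlog from idle-preamble counts, then pick p.
import numpy as np
from sklearn.naive_bayes import GaussianNB

M = 54                                   # contention preambles per slot (assumed)
rng = np.random.default_rng(0)

def simulate_idle(n, trials):
    """For n backlogged devices all transmitting, count idle preambles per slot."""
    idle = []
    for _ in range(trials):
        chosen = rng.integers(0, M, n)
        idle.append(M - len(np.unique(chosen)))
    return np.array(idle)

# Training data: known backlog sizes mapped to observed idle-preamble counts.
ns = np.arange(10, 300, 10)
X_train = np.concatenate([simulate_idle(n, 200) for n in ns]).reshape(-1, 1)
y_train = np.repeat(ns, 200)
nb = GaussianNB().fit(X_train, y_train)

observed_idle = np.array([[12]])          # idle preambles seen by the eNodeB in a slot
n_hat = nb.predict(observed_idle)[0]      # estimated backlog
p_opt = min(1.0, M / n_hat)               # transmission probability maximizing success
print(f"estimated backlog ≈ {n_hat}, transmission probability ≈ {p_opt:.2f}")
```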

Keywords: random access, LTE/LTE-A, 5G, machine learning, Naïve Bayes estimation

Procedia PDF Downloads 141
9240 The Structure and Development of a Wing Tip Vortex under the Effect of Synthetic Jet Actuation

Authors: Marouen Dghim, Mohsen Ferchichi

Abstract:

The effect of synthetic jet actuation on the roll-up and development of a wing tip vortex downstream of a square-tipped rectangular wing was investigated experimentally using hotwire anemometry. The wing is equipped with a hollow cavity designed to generate high-aspect-ratio synthetic jets blowing at an angle with respect to the spanwise direction. The structure of the wing tip vortex under the effect of fluidic actuation was examined at a chord Reynolds number Re_c = 8×10^4. An extensive qualitative study of the effect of actuation on the spanwise pressure distribution at c/4 was carried out using pressure scanner measurements in order to determine the optimal actuation parameters, namely the blowing momentum coefficient, Cμ, and the non-dimensionalized actuation frequency, F^+. This qualitative study showed that the optimal actuation frequencies of the synthetic jet lie within the range amplified by both long- and short-wave instabilities, where the spanwise pressure coefficients exhibited a considerable decrease of up to 60%. The vortex appeared larger and more diffuse than the natural vortex. Operating the synthetic jet seemed to introduce unsteadiness and turbulence into the vortex core. Based on the 'a priori' selected optimal parameters, the results of the hotwire wake survey indicated that the actuation achieved a reduction and broadening of the axial velocity deficit. A decrease in the peak tangential velocity, associated with an increase in the vortex core radius, was reported as a result of the accelerated radial transport of angular momentum. The peak vorticity level near the core was also found to be largely diffused as a direct result of the increased turbulent mixing within the vortex. The wing tip vortex thus exhibited reduced strength and a diffused core as a direct result of increased turbulent mixing due to the presence of turbulent small-scale vortices within its core. It is believed that the increased turbulence within the vortex due to the synthetic jet control was the main mechanism behind the decreased strength and increased size of the wing tip vortex as it evolves downstream. A comparison with a 'non-optimal' case is included to demonstrate the effectiveness of selecting the appropriate control parameters. The synthetic jet will be operated at various actuation configurations, and an extensive parametric study is planned to determine the optimal actuation parameters.

Keywords: flow control, hotwire anemometry, synthetic jet, wing tip vortex

Procedia PDF Downloads 430
9239 Gradient Boosted Trees on Spark Platform for Supervised Learning in Health Care Big Data

Authors: Gayathri Nagarajan, L. D. Dhinesh Babu

Abstract:

Health care is one of the prominent industries that generate voluminous data, thereby creating the need for machine learning techniques combined with big data solutions for efficient processing and prediction. Missing data, incomplete data, real-time streaming data, sensitive data, privacy, and heterogeneity are a few of the common challenges to be addressed for efficient processing and mining of health care data. In comparison with other applications, accuracy and fast processing are of higher importance for health care applications as they relate directly to human life. Though there are many machine learning techniques and big data solutions used for efficient processing and prediction in health care data, different techniques and different frameworks have proved to be effective for different applications, largely depending on the characteristics of the datasets. In this paper, we present a framework that uses the ensemble machine learning technique of gradient boosted trees for data classification in health care big data. The framework is built on the Spark platform, which is fast in comparison with other traditional frameworks. Unlike other works that focus on a single technique, our work presents a comparison of six different machine learning techniques along with gradient boosted trees on datasets of different characteristics. Five benchmark health care datasets are considered for experimentation, and the results of the different machine learning techniques are discussed in comparison with gradient boosted trees. The metrics chosen for comparison are the misclassification error rate and the run time of the algorithms. The goals of this paper are to i) compare the performance of gradient boosted trees with other machine learning techniques on the Spark platform specifically for health care big data and ii) discuss the results from the experiments conducted on datasets of different characteristics, thereby drawing inferences and conclusions. The experimental results show that, for the other machine learning techniques, accuracy is largely dependent on the characteristics of the datasets, whereas gradient boosted trees yield reasonably stable results in terms of accuracy without depending heavily on the dataset characteristics.
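
A minimal sketch of training gradient boosted trees on Spark for such a classification task is given below. The CSV path, the "label" column name, and the hyperparameters are placeholders; any of the benchmark health care datasets would be loaded and encoded along these lines.

```python
# Sketch: gradient boosted trees classification on the Spark platform.
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import GBTClassifier
from pyspark.ml.evaluation import MulticlassClassificationEvaluator

spark = SparkSession.builder.appName("healthcare-gbt").getOrCreate()
df = spark.read.csv("healthcare_dataset.csv", header=True, inferSchema=True)  # placeholder path

feature_cols = [c for c in df.columns if c != "label"]        # assumes a numeric "label" column
data = VectorAssembler(inputCols=feature_cols, outputCol="features").transform(df)
train, test = data.randomSplit([0.8, 0.2], seed=42)

gbt = GBTClassifier(labelCol="label", featuresCol="features", maxIter=50, maxDepth=5)
model = gbt.fit(train)

pred = model.transform(test)
err = 1.0 - MulticlassClassificationEvaluator(labelCol="label",
                                              metricName="accuracy").evaluate(pred)
print("misclassification error rate:", err)
spark.stop()
```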

Keywords: big data analytics, ensemble machine learning, gradient boosted trees, Spark platform

Procedia PDF Downloads 233
9238 Accuracy of Fitbit Charge 4 for Measuring Heart Rate in Parkinson’s Patients During Intense Exercise

Authors: Giulia Colonna, Jocelyn Hoye, Bart de Laat, Gelsina Stanley, Jose Key, Alaaddin Ibrahimy, Sule Tinaz, Evan D. Morris

Abstract:

Parkinson's disease (PD) is the second most common neurodegenerative disease and affects approximately 1% of the world's population. Increasing evidence suggests that aerobic physical exercise can be beneficial in mitigating both motor and non-motor symptoms of the disease. In a recent pilot study of the role of exercise in PD, we sought to confirm exercise intensity by monitoring heart rate (HR). For this purpose, we asked participants to wear a chest strap heart rate monitor (Polar Electro Oy, Kempele). The device sometimes proved uncomfortable. Looking forward to larger clinical trials, it would be convenient to employ a more comfortable and user-friendly device. The Fitbit Charge 4 (Fitbit Inc.) is a potentially comfortable, user-friendly alternative, since it is a wrist-worn heart rate monitor. The Polar H10 has been used in large trials, and for our purposes we treated it as the gold standard for beat-to-beat period (R-R interval) assessment. Previous literature has shown that the Fitbit Charge 4 has accuracy comparable to the Polar H10 in healthy subjects. It has yet to be determined whether the Fitbit is as accurate as the Polar H10 in subjects with PD or in clinical populations generally. Goal: To compare the Fitbit Charge 4 to the Polar H10 for monitoring HR in PD subjects engaging in an intensive exercise program. Methods: A total of 596 exercise sessions from 11 subjects (6 males) were collected simultaneously by both devices. Subjects with early-stage PD (Hoehn & Yahr <= 2) were enrolled in a 6-month exercise training program designed for PD patients. Subjects participated in three one-hour exercise sessions per week and wore both the Fitbit and the Polar H10 during each session. Sessions included rest, warm-up, intensive exercise, and cool-down periods. We calculated the bias in the HR via Fitbit under rest (5 min) and intensive exercise (20 min) by comparing the mean HR during each of the periods to the respective means measured by the Polar (HRFitbit - HRPolar). We also measured the sensitivity and specificity of the Fitbit for detecting HRs that exceed the threshold for intensive exercise, defined as 70% of an individual's theoretical maximum HR. Different types of correlation between the two devices were investigated. Results: The mean bias was 1.68 bpm at rest and 6.29 bpm during high-intensity exercise, with an overestimation by the Fitbit in both conditions. The mean bias of the Fitbit across both rest and intensive exercise periods was 3.98 bpm. The sensitivity of the device in identifying high-intensity exercise sessions was 97.14%. The correlation between the two devices was non-linear, suggesting a tendency of the Fitbit to saturate at high values of HR. Conclusion: The performance of the Fitbit Charge 4 is comparable to that of the Polar H10 for assessing exercise intensity in a cohort of PD subjects. The device should be considered a reasonable replacement for the more cumbersome chest strap technology in future similar studies of clinical populations.
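
The two summary statistics reported here can be sketched as follows: mean bias over a session segment and sensitivity for flagging heart rates above 70% of the theoretical maximum. The sample arrays, the age-predicted maximum formula (220 − age), and the age used are illustrative placeholders rather than study data.

```python
# Sketch of the bias and sensitivity calculations described in the abstract.
import numpy as np

def mean_bias(hr_fitbit, hr_polar):
    """Mean of (HR_Fitbit - HR_Polar) over a rest or exercise segment."""
    return float(np.mean(np.asarray(hr_fitbit) - np.asarray(hr_polar)))

def intensity_sensitivity(hr_fitbit, hr_polar, age):
    """Sensitivity for detecting HR above 70% of the theoretical maximum (Polar as truth)."""
    threshold = 0.70 * (220 - age)
    truth = np.asarray(hr_polar) >= threshold
    flagged = np.asarray(hr_fitbit) >= threshold
    tp = np.sum(flagged & truth)
    fn = np.sum(~flagged & truth)
    return tp / (tp + fn) if (tp + fn) else float("nan")

hr_polar = np.array([72, 95, 130, 142, 150, 118])    # illustrative beats per minute
hr_fitbit = np.array([74, 99, 135, 149, 158, 121])
print("bias (bpm):", mean_bias(hr_fitbit, hr_polar))
print("sensitivity:", intensity_sensitivity(hr_fitbit, hr_polar, age=62))
```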

Keywords: fitbit, heart rate measurements, parkinson’s disease, wrist-wearable devices

Procedia PDF Downloads 99
9237 Hybrid Intelligent Optimization Methods for Optimal Design of Horizontal-Axis Wind Turbine Blades

Authors: E. Tandis, E. Assareh

Abstract:

The optimal shape of MW wind turbine blades is, in a number of cases, designed through evolutionary algorithms coupled with mathematical modeling (Blade Element Momentum theory). Among optimization methods, evolutionary algorithms enjoy many advantages, particularly in stability. However, they usually need a large number of function evaluations. Since there are a large number of local extrema, the optimization method has to find the global extremum accurately. The present paper introduces a new population-based hybrid algorithm called the Genetic-Based Bees Algorithm (GBBA). This algorithm is meant to design the optimal shape for MW wind turbine blades. The method employs crossover and neighborhood searching operators taken from the Genetic Algorithm (GA) and the Bees Algorithm (BA), respectively, to provide a method with good performance in accuracy and convergence speed. Different blade designs, twenty-one to be exact, were considered based on the chord length, twist angle, and tip speed ratio using GA results. They were compared with the BA and GBBA optimum design results, targeting the power coefficient and solidity. The results suggest that the final shape obtained by the proposed hybrid algorithm performs better than that of either the BA or the GA. Furthermore, the accuracy and convergence speed increase when the GBBA is employed.

Keywords: Blade Design, Optimization, Genetic Algorithm, Bees Algorithm, Genetic-Based Bees Algorithm, Large Wind Turbine

Procedia PDF Downloads 313
9236 Developing an Advanced Algorithm Capable of Classifying News, Articles and Other Textual Documents Using Text Mining Techniques

Authors: R. B. Knudsen, O. T. Rasmussen, R. A. Alphinas

Abstract:

The reason for conducting this research is to develop an algorithm capable of classifying news articles from the automobile industry according to the competitive actions that they entail, with the use of Text Mining (TM) methods. This requires testing how to properly preprocess the data by preparing pipelines that best fit each algorithm. The pipelines are tested along with nine different classification algorithms in the realm of regression, support vector machines, and neural networks. Preliminary testing to identify the optimal pipelines and algorithms resulted in the selection of two algorithms with two different pipelines. The two algorithms are Logistic Regression (LR) and an Artificial Neural Network (ANN). These algorithms are optimized further by testing several parameters of each. The best result is achieved with the ANN. The final model yields an accuracy of 0.79, a precision of 0.80, a recall of 0.78, and an F1 score of 0.76. By removing three of the classes that created noise, the final algorithm is capable of reaching an accuracy of 94%.

Keywords: Artificial Neural network, Competitive dynamics, Logistic Regression, Text classification, Text mining

Procedia PDF Downloads 117
9235 Model of Optimal Centroids Approach for Multivariate Data Classification

Authors: Pham Van Nha, Le Cam Binh

Abstract:

Particle swarm optimization (PSO) is a population-based stochastic optimization algorithm. PSO was inspired by the natural behavior of birds and fish during migration and foraging for food. PSO is considered a multidisciplinary optimization model that can be applied to various optimization problems. PSO's ideas are simple and easy to understand, but PSO has mainly been applied to simple model problems. We believe that, in order to expand the applicability of PSO to complex problems, PSO should be described more explicitly in the form of a mathematical model. In this paper, we represent PSO as a mathematical model and apply it to multivariate data classification. First, PSO's general mathematical model (MPSO) is analyzed as a universal optimization model. Then, the Model of Optimal Centroids (MOC) is proposed for multivariate data classification. Experiments were conducted on several benchmark data sets to demonstrate the effectiveness of MOC compared with several previously proposed schemes.
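
A minimal sketch of the optimal-centroids idea is shown below: each particle encodes a full set of k centroids, and PSO minimizes the within-cluster sum of squared distances. The PSO constants, the toy dataset, and the fitness function are assumptions for illustration, not the paper's MOC formulation.

```python
# Sketch: PSO searching for k cluster centroids that minimize within-cluster SSE.
import numpy as np

def wcss(centroids, X):
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return float(np.sum(d.min(axis=1) ** 2))

def pso_centroids(X, k=3, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    dim = X.shape[1]
    pos = rng.uniform(X.min(0), X.max(0), (n_particles, k, dim))   # each particle = k centroids
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.array([wcss(p, X) for p in pos])
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos += vel
        vals = np.array([wcss(p, X) for p in pos])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = pos[better], vals[better]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest

X = np.vstack([np.random.default_rng(i).normal(m, 0.4, (50, 2)) for i, m in enumerate([0, 4, 8])])
print(np.round(pso_centroids(X, k=3), 2))   # should recover centroids near (0,0), (4,4), (8,8)
```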

Keywords: analysis of optimization, artificial intelligence based optimization, optimization for learning and data analysis, global optimization

Procedia PDF Downloads 202
9234 Investigating Geopolymerization Process of Aluminosilicates and its Impact on the Compressive Strength of the Produced Geopolymers

Authors: Heba Fouad, Tarek M. Madkour, Safwan A. Khedr

Abstract:

This paper investigates multiple factors that impact the formation of geopolymers and their compressive strength, with the aim of utilizing them in construction as an environmentally friendly material. Bentonite and kaolinite were thermally calcined at 750 °C to obtain metabentonite and metakaolinite with higher reactivity. Both source materials were activated using a solution of sodium hydroxide (NaOH). Thereafter, samples were cured at different temperatures. The samples were analyzed chemically using a host of spectroscopic techniques. The bulk density and compressive strength of the produced geopolymer pastes were studied. The findings indicate that the ratio of NaOH solution to source material affects the compressive strength, being optimal at 0.54. Moreover, controlled heat curing proved effective in improving compressive strength. The existence of characteristic Fourier Transform Infrared Spectroscopy (FTIR) peaks at approximately 1020 cm-1 and 460 cm-1, which correspond to the asymmetric stretching vibration of Si-O-T and the bending vibration of Si-O-Si, confirms the formation of the target geopolymer.

Keywords: calcination of metakaolinite, compressive strength, FTIR analysis, geopolymer, green cement

Procedia PDF Downloads 157
9233 Examining Statistical Monitoring Approach against Traditional Monitoring Techniques in Detecting Data Anomalies during Conduct of Clinical Trials

Authors: Sheikh Omar Sillah

Abstract:

Introduction: Monitoring is an important means of ensuring the smooth implementation and quality of clinical trials. For many years, traditional site monitoring approaches have been critical in detecting data errors but not optimal in identifying fabricated and implanted data, as well as non-random data distributions, that may significantly invalidate study results. The objective of this paper was to provide recommendations, based on best statistical monitoring practices, for detecting data-integrity issues suggestive of fabrication and implantation early in the study conduct, allowing implementation of meaningful corrective and preventive actions. Methodology: Electronic bibliographic databases (Medline, Embase, PubMed, Scopus, and Web of Science) were used for the literature search, and both qualitative and quantitative studies were sought. Search results were uploaded into the EPPI-Reviewer software, and only publications written in English from 2012 onward were included in the review. Gray literature not considered to present reproducible methods was excluded. Results: A total of 18 peer-reviewed publications were included in the review. The publications demonstrated that traditional site monitoring techniques are not efficient in detecting data anomalies. By specifying project-specific parameters such as laboratory reference range values, visit schedules, etc., together with appropriate interactive data monitoring, statistical monitoring can offer study teams early signals of data anomalies. The review further revealed that statistical monitoring is useful for identifying unusual data patterns that might reveal issues affecting data integrity or potentially impacting study participants' safety. However, subjective measures may not be good candidates for statistical monitoring. Conclusion: The statistical monitoring approach requires a combination of education, training, and experience sufficient to implement its principles in detecting data anomalies in the statistical aspects of a clinical trial.

Keywords: statistical monitoring, data anomalies, clinical trials, traditional monitoring

Procedia PDF Downloads 65
9232 The Analysis of the Two Dimensional Huxley Equation Using the Galerkin Method

Authors: Pius W. Molo Chin

Abstract:

Real-life problems such as those described by the Huxley equation are always modeled as nonlinear differential equations. These problems need accurate and reliable methods for their solution. In this paper, we propose a nonstandard finite difference method in time and the Galerkin method combined with the compactness method in the space variables. This coupled method is used to analyze a two-dimensional Huxley equation, establishing the existence and uniqueness of the continuous solution of the problem in appropriate spaces to be defined. We proceed to design a numerical scheme consisting of the aforementioned methods and show that the scheme is stable. We further show that the stable scheme converges at a rate that is optimal in both the L2 and H1 norms. Furthermore, we show that the scheme replicates the decay properties of the exact solution. Numerical experiments are presented, with the help of an example, to justify the validity of the designed scheme.
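
To indicate the kind of time discretization involved, the sketch below writes a nonstandard finite difference step for the Huxley equation assuming its commonly used cubic nonlinearity f(u) = u(1-u)(u-γ). The denominator function and the nonlocal treatment of f are the two standard NSFD ingredients; the paper's exact scheme, combined with the Galerkin discretization in the space variables, may differ.

```latex
% Illustrative sketch only; the cubic nonlinearity is an assumed form, and the
% spatial operator is left continuous (to be handled by the Galerkin method).
\[
u_t = \Delta u + u(1-u)(u-\gamma)
    = \Delta u - u^{3} + (1+\gamma)u^{2} - \gamma u ,
\]
\[
\frac{u^{n+1}-u^{n}}{\phi(\Delta t)}
  = \Delta u^{n+1} + (1+\gamma)\,(u^{n})^{2}
    - (u^{n})^{2}\,u^{n+1} - \gamma\,u^{n+1},
\qquad
\phi(\Delta t) = \frac{e^{\gamma \Delta t}-1}{\gamma} = \Delta t + O(\Delta t^{2}),
\]
% i.e., the nonlinear term is treated nonlocally (partly at t_n, partly at t_{n+1})
% and the classical step size is replaced by the denominator function \phi.
```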

Keywords: Huxley equations, non-standard finite difference method, Galerkin method, optimal rate of convergence

Procedia PDF Downloads 209
9231 Analysis of Spamming Threats and Some Possible Solutions for Online Social Networking Sites (OSNS)

Authors: Dilip Singh Sisodia, Shrish Verma

Abstract:

Spamming is the most common issue seen nowadays on the Internet, especially on online social networking sites (such as Facebook, Twitter, and Google+). Spam messages keep wasting Internet bandwidth and the storage space of servers. On social network sites, spammers often disguise themselves by creating fake accounts and hijacking users' accounts for personal gain. They behave like normal users and continuously change their spamming strategy. To counter this, most modern spam-filtering solutions are deployed on the receiver side; they are good at filtering spam for end users. In this paper, we present some spamming techniques, their behaviour, and possible solutions. We have analyzed how spammers enter online social networking sites (OSNSs), how they target them, and the techniques they use. The five spamming techniques discussed are clickjacking, socially engineered attacks, cross-site scripting, URL shortening, and drive-by download. We have used the Elgg framework to demonstrate some of these spamming threats and the respective implementation of solutions.

Keywords: online social networking sites, spam, attacks, internet, clickjacking / likejacking, drive-by-download, URL shortening, networking, socially engineered attacks, elgg framework

Procedia PDF Downloads 342
9230 Optimizing E-commerce Retention: A Detailed Study of Machine Learning Techniques for Churn Prediction

Authors: Saurabh Kumar

Abstract:

In the fiercely competitive landscape of e-commerce, understanding and mitigating customer churn has become paramount for sustainable business growth. This paper presents a thorough investigation into the application of machine learning techniques for churn prediction in e-commerce, aiming to provide actionable insights for businesses seeking to enhance customer retention strategies. We conduct a comparative study of various machine learning algorithms, including traditional statistical methods and ensemble techniques, leveraging a rich dataset sourced from Kaggle. Through rigorous evaluation, we assess the predictive performance, interpretability, and scalability of each method, elucidating their respective strengths and limitations in capturing the intricate dynamics of customer churn. We identified the XGBoost classifier to be the best performing. Our findings not only offer practical guidelines for selecting suitable modeling approaches but also contribute to the broader understanding of customer behavior in the e-commerce domain. Ultimately, this research equips businesses with the knowledge and tools necessary to proactively identify and address churn, thereby fostering long-term customer relationships and sustaining competitive advantage.
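
A minimal sketch of this kind of comparative churn experiment is given below, assuming a Kaggle-style e-commerce table with a binary churn column. The file name, feature handling, model set, and hyperparameters are placeholders rather than the paper's exact configuration.

```python
# Sketch: comparing traditional and ensemble classifiers for churn prediction.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from xgboost import XGBClassifier

df = pd.read_csv("ecommerce_churn.csv")            # placeholder file name
X = pd.get_dummies(df.drop(columns=["churn"]))     # one-hot encode categorical features
y = df["churn"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=42)

models = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=300, random_state=42),
    "xgboost": XGBClassifier(n_estimators=300, learning_rate=0.1, eval_metric="logloss"),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: ROC-AUC = {auc:.3f}")
```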

Keywords: customer churn, e-commerce, machine learning techniques, predictive performance, sustainable business growth

Procedia PDF Downloads 10
9229 Supplier Risk Management: A Multivariate Statistical Modelling and Portfolio Optimization Based Approach for Supplier Delivery Performance Development

Authors: Jiahui Yang, John Quigley, Lesley Walls

Abstract:

In this paper, the authors develop a stochastic model of investment in supplier delivery performance development from a buyer's perspective. The authors propose a multivariate model through a Multinomial-Dirichlet distribution within an Empirical Bayesian inference framework, representing both the epistemic and aleatory uncertainties in deliveries. A closed-form solution is obtained, and the lower and upper bounds for both the optimal investment level and the expected profit under uncertainty are derived. The theoretical properties provide decision makers with useful insights regarding supplier delivery performance improvement problems where multiple delivery statuses are involved. The authors also extend the model from a single-supplier investment to a supplier portfolio, using a Lagrangian method to obtain a theoretical expression for the optimal investment level and overall expected profit. The model enables a buyer to know how the marginal expected profit/investment level of each supplier changes with respect to the budget and which supplier should be invested in when additional budget is available. An application of this model is illustrated in a simulation study. Overall, the main contribution of this study is to provide an optimal investment decision-making framework for supplier development, taking into account multiple delivery statuses as well as multiple projects.
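
The Multinomial-Dirichlet building block underlying such a model can be sketched as follows: delivery outcomes fall into several statuses, a Dirichlet prior captures the epistemic uncertainty, and observed deliveries update it conjugately. The statuses, prior counts, and profit weights below are illustrative assumptions, not the paper's calibrated values.

```python
# Sketch: conjugate Multinomial-Dirichlet update of delivery-status probabilities.
import numpy as np

statuses = ["on_time", "late", "very_late"]
alpha_prior = np.array([4.0, 2.0, 1.0])            # Dirichlet prior (e.g., from pooled supplier data)
observed = np.array([18, 5, 2])                    # deliveries observed for one supplier

alpha_post = alpha_prior + observed                # posterior is again Dirichlet (conjugacy)
p_hat = alpha_post / alpha_post.sum()              # posterior mean delivery-status probabilities

profit_per_status = np.array([100.0, 60.0, 10.0])  # assumed profit contribution per delivery
expected_profit = float(p_hat @ profit_per_status)
for s, p in zip(statuses, p_hat):
    print(f"P({s}) ≈ {p:.3f}")
print("expected profit per delivery ≈", round(expected_profit, 2))
```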

Keywords: decision making, empirical bayesian, portfolio optimization, supplier development, supply chain management

Procedia PDF Downloads 282
9228 Long-Term Results of Coronary Bifurcation Stenting with Drug Eluting Stents

Authors: Piotr Muzyk, Beata Morawiec, Mariusz Opara, Andrzej Tomasik, Brygida Przywara-Chowaniec, Wojciech Jachec, Ewa Nowalany-Kozielska, Damian Kawecki

Abstract:

Background: Coronary bifurcation is one of the most complex lesions in patients with coronary artery disease. Provisional T-stenting is currently one of the recommended techniques. The aim was to assess optimal methods of treatment in the era of drug-eluting stents (DES). Methods: The registry consisted of data from 1916 patients treated with percutaneous coronary interventions (PCI) using either first- or second-generation DES. Patients with a bifurcation lesion entered the analysis. Major adverse cardiac and cardiovascular events (MACCE) were assessed at one year of follow-up and comprised death, acute myocardial infarction (AMI), repeated PCI (re-PCI) of the target vessel, and stroke. Results: Of 1916 registry patients, 204 patients (11%) were diagnosed with a bifurcation lesion >50% and entered the analysis. The most commonly used technique was provisional T-stenting (141 patients, 69%). Optimization with the kissing-balloon technique was performed in 45 patients (22%). In 59 patients (29%) a second-generation DES was implanted, while in 112 patients (55%) a first-generation DES was used. In 33 patients (16%) both types of DES were used. The procedure success rate (TIMI 3 flow) was 98%. At one-year follow-up, there were 39 MACCE (19%) (9 deaths, 17 AMI, 16 re-PCI, and 5 strokes). Provisional T-stenting resulted in a rate of MACCE similar to that of other techniques (16% vs. 5%, p=0.27) and a similar occurrence of re-PCI (6% vs. 2%, p=0.78). Post-PCI kissing-balloon optimization gave outcomes equal to those in patients in whom no optimization technique was used (3% vs. 16% MACCE, p=0.39). The type of implanted DES (second- vs. first-generation) had no influence on MACCE (4% vs. 14%, respectively, p=0.12) or re-PCI (1.7% vs. 51% of patients, respectively, p=0.28). Conclusions: The treatment of bifurcation lesions with PCI represents a high-risk procedure with a high rate of MACCE. The stenting technique, the optimization of PCI, and the generation of the implanted stent should be personalized for each case to balance the risk of the procedure. In this setting, operator experience might be a factor in better outcomes, which should be further investigated.

Keywords: coronary bifurcation, drug eluting stents, long-term follow-up, percutaneous coronary interventions

Procedia PDF Downloads 198