Search results for: binodal curve
857 Thermal Cracking Approach Investigation to Improve Biodiesel Properties
Authors: Roghaieh Parvizsedghy, Seyyed Mojtaba Sadrameli
Abstract:
Biodiesel as an alternative diesel fuel is steadily gaining more attention and significance. However, there are some drawbacks to using biodiesel with regard to its properties, which require it to be blended with petroleum-based diesel and/or additives to improve the fuel characteristics. This study analyses thermal cracking as an alternative technology to improve biodiesel characteristics, in which FAME-based biodiesel produced by transesterification of castor oil is fed into a continuous thermal cracking reactor at a temperature range of 450-500°C and a flow rate range of 20-40 g/hr. Experiments designed by response surface methodology and subsequent statistical studies show that temperature and feed flow rate significantly affect the product yield. Response surfaces were used to study the impact of temperature and flow rate on the product properties. After each experiment, the produced crude bio-oil was distilled and the diesel cut was separated. As shorter-chain molecules are produced through thermal cracking, the distillation curve of the diesel cut fitted the petroleum-based diesel curve more closely than the biodiesel did. Moreover, the properties of the produced diesel cut fall adequately within the ranges defined by the relevant petroleum-based diesel standard. Cold flow properties and high heating value, the main drawbacks of biodiesel, are improved by this technology. Thermal cracking decreases kinematic viscosity, flash point, and cetane number.
Keywords: biodiesel, castor oil, fuel properties, thermal cracking
Procedia PDF Downloads 260
856 Half Model Testing for Canard of a Hybrid Buoyant Aircraft
Authors: Anwar U. Haque, Waqar Asrar, Ashraf Ali Omar, Erwin Sulaeman, Jaffer Sayed Mohamed Ali
Abstract:
Due to interference effects, the intrinsic aerodynamic parameters obtained from individual component testing are always fundamentally different from those obtained in complete model testing. Considerations and limitations of such testing need to be taken into account in any design work related to the component build-up method. In this paper, a scaled model of the straight rectangular canard of a hybrid buoyant aircraft is tested at 50 m/s in the IIUM Low-Speed Wind Tunnel (IIUM-LSWT). The model and its attachment to the balance are kept rigid so that the results are free from aeroelastic distortion. Based on the velocity profile of the test section's floor, the height of the model is kept equal to the corresponding boundary layer displacement. Balance measurements provide valuable but limited information about the overall aerodynamic behavior of the model. Zero lift is obtained at -2.2°, and the corresponding drag coefficient was found to be less than that at zero angle of attack. As part of the validation of a low-fidelity tool, the lift coefficient plot was verified against the experimental data; except for the zero-lift angle, the panel method under-predicted the lift coefficient over the whole trend. Based on this comparative study, a correction factor of 1.36 is proposed for the lift curve slope obtained from the panel method.
Keywords: wind tunnel testing, boundary layer displacement, lift curve slope, canard, aerodynamics
Procedia PDF Downloads 469
855 Iron Deficiency and Iron Deficiency Anaemia/Anaemia as a Diagnostic Indicator for Coeliac Disease: A Systematic Review with Meta-Analysis
Authors: Sahar Shams
Abstract:
Coeliac disease (CD) is a widely reported disease, particularly in countries with predominantly Caucasian populations. It presents with many signs and symptoms, including iron deficiency (ID) and iron deficiency anaemia/anaemia (IDA/A). The exact association between ID, IDA/A and CD, and how accurate these signs are in diagnosing CD, is not fully known. This systematic review was conducted to investigate the accuracy of both ID and IDA/A as diagnostic indicators for CD and whether they warrant point-of-care testing. A systematic review was performed of studies published in MEDLINE, Embase, the Cochrane Library, and Web of Science. The QUADAS-2 tool was used to assess risk of bias in each study. A ROC curve and forest plots were generated as part of the meta-analysis after data extraction. 16 studies were identified in total, 13 of which were IDA/A studies and 3 ID studies. The prevalence of CD, regardless of diagnostic indicator, was assumed to be 1%. The QUADAS-2 tool indicated that most studies had a high risk of bias. The PPV for CD was higher in those with ID than in those with IDA/A. Meta-analysis showed the overall odds of having CD are 5 times higher in individuals with ID and IDA/A. The ROC curve showed that although there is an association between both diagnostic indicators and CD, the association is not a particularly strong one due to great heterogeneity between studies. While an association between IDA/A and ID and coeliac disease was evident, the results were not deemed significant enough to prompt coeliac disease testing in those with IDA/A and ID.
Keywords: anemia, iron deficiency anemia, coeliac disease, point of care testing
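As a hedged illustration of the positive-predictive-value comparison above (the numbers below are invented, not the review's data), a PPV can be derived from an assumed 1% prevalence together with a test's sensitivity and specificity via Bayes' rule:

```python
# Hypothetical sketch: PPV from prevalence, sensitivity and specificity.
# The 80%/90% figures are placeholders, not values from the review.
def ppv(prevalence, sensitivity, specificity):
    """PPV = P(disease | positive test) by Bayes' rule."""
    tp = prevalence * sensitivity              # true positives per unit population
    fp = (1 - prevalence) * (1 - specificity)  # false positives per unit population
    return tp / (tp + fp)

print(round(ppv(0.01, 0.80, 0.90), 4))  # → 0.0748
```

At a 1% prevalence even a fairly accurate test yields a low PPV, which is why the review's PPV findings hinge on the assumed prevalence.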
Procedia PDF Downloads 130
854 Comparative Diagnostic Performance of Diffusion-Weighted Imaging Combined with Microcalcifications on Mammography for Discriminating Malignant from Benign BI-RADS 4 Lesions with the Kaiser Score
Authors: Wangxu Xia
Abstract:
BACKGROUND: BI-RADS 4 lesions raise the possibility of malignancy and warrant further clinical and radiologic work-up. This study aimed to evaluate the predictive performance of diffusion-weighted imaging (DWI) and microcalcifications on mammography for predicting malignancy of BI-RADS 4 lesions. In addition, the predictive performance of DWI combined with microcalcifications was also compared with the Kaiser score. METHODS: Between January 2021 and June 2023, 144 patients with 178 BI-RADS 4 lesions who underwent conventional MRI, DWI, and mammography were included. The lesions were dichotomized into benign or malignant according to the pathological results from core needle biopsy or surgical mastectomy. DWI was performed with b values of 0 and 800 s/mm² and analyzed using the apparent diffusion coefficient, and a Kaiser score > 4 was considered to suggest malignancy. The diagnostic performance of the various diagnostic tests was evaluated with the receiver operating characteristic (ROC) curve. RESULTS: The area under the curve (AUC) for DWI was significantly higher than that of mammography (0.86 vs 0.71, P<0.001), but was comparable with that of the Kaiser score (0.86 vs 0.84, P=0.58). However, the AUC for DWI combined with mammography was significantly higher than that of the Kaiser score (0.93 vs 0.84, P=0.007). The sensitivity for discriminating malignant from benign BI-RADS 4 lesions was highest, at 89%, for the Kaiser score, but the highest specificity, of 83%, was achieved with DWI combined with mammography. CONCLUSION: DWI combined with microcalcifications on mammography could discriminate malignant BI-RADS 4 lesions from benign ones with a high AUC and specificity. However, the Kaiser score had better sensitivity for discrimination.
Keywords: MRI, DWI, mammography, breast disease
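The AUC values compared above can be computed from a continuous score and a binary outcome via the Mann-Whitney U statistic, which equals the area under the ROC curve. This sketch uses invented scores, not patient data; treating "1 minus normalised ADC" as the malignancy score is an assumption for illustration only:

```python
import numpy as np

def auc(scores, labels):
    """AUC as the probability that a random positive case outscores a random negative one."""
    scores, labels = np.asarray(scores, float), np.asarray(labels, int)
    pos, neg = scores[labels == 1], scores[labels == 0]
    wins = (pos[:, None] > neg[None, :]).sum() + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

adc_score = [0.9, 0.8, 0.75, 0.6, 0.4, 0.35, 0.3, 0.2]  # hypothetical, e.g. 1 - normalised ADC
malignant = [1,   1,   1,    0,   1,   0,    0,   0]    # invented biopsy outcomes
print(round(auc(adc_score, malignant), 3))  # → 0.938
```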
Procedia PDF Downloads 59
853 Insulin Resistance in Children and Adolescents in Relation to Body Mass Index, Waist Circumference and Body Fat Weight
Authors: E. Vlachopapadopoulou, E. Dikaiakou, E. Anagnostou, I. Panagiotopoulos, E. Kaloumenou, M. Kafetzi, A. Fotinou, S. Michalacos
Abstract:
Aim: To investigate the relation and impact of Body Mass Index (BMI), Waist Circumference (WC) and Body Fat Weight (BFW) on insulin resistance (Matsuda index < 2.5) in children and adolescents. Methods: Data from 95 overweight and obese children (47 boys and 48 girls) with mean age 10.7 ± 2.2 years were analyzed. ROC analysis was used to investigate the predictive ability of BMI, WC and BFW for insulin resistance and to find the optimal cut-offs. The overall performance of the ROC analysis was quantified by computing the area under the curve (AUC). Results: ROC curve analysis indicated that the optimal cut-off of WC for the prediction of insulin resistance was 97 cm, with sensitivity equal to 75% and specificity equal to 73.1%. AUC was 0.78 (95% CI: 0.63-0.92, p=0.001). The sensitivity and specificity of obesity for discriminating participants with insulin resistance from those without were 58.3% and 75%, respectively (AUC=0.67). BFW had a borderline predictive ability for insulin resistance (AUC=0.58, 95% CI: 0.43-0.74, p=0.101). The predictive ability of WC was equivalent to the corresponding predictive ability of BMI (p=0.891). Obese subjects had 4.2 times greater odds of having insulin resistance (95% CI: 1.71-10.30, p < 0.001), while subjects with WC of more than 97 cm had 8.1 times greater odds of having insulin resistance (95% CI: 2.14-30.86, p=0.002). Conclusion: BMI and WC are important clinical factors that have a significant clinical relation with insulin resistance in children and adolescents. The cut-off of 97 cm for WC can identify children with a greater likelihood of insulin resistance.
Keywords: body fat weight, body mass index, insulin resistance, obese children, waist circumference
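An "optimal cut-off" of the kind reported above is commonly chosen by maximising Youden's J = sensitivity + specificity − 1 over the ROC operating points. A minimal sketch with synthetic waist-circumference data (not the study's measurements; the toy data are constructed so the cut-off lands at 97 cm):

```python
import numpy as np

def youden_cutoff(values, labels):
    """Return (best_threshold, sensitivity, specificity) maximising Youden's J."""
    values, labels = np.asarray(values, float), np.asarray(labels, int)
    best = (None, -1.0, 0.0, 0.0)
    for t in np.unique(values):
        pred = values >= t                                   # positive = at or above cut-off
        sens = (pred & (labels == 1)).sum() / (labels == 1).sum()
        spec = (~pred & (labels == 0)).sum() / (labels == 0).sum()
        j = sens + spec - 1
        if j > best[1]:
            best = (t, j, sens, spec)
    return float(best[0]), float(best[2]), float(best[3])

wc = [80, 85, 90, 96, 97, 99, 102, 105, 88, 93]   # waist circumference, cm (synthetic)
ir = [0,  0,  0,  0,  1,  1,  1,   1,   0,  0]    # 1 = insulin resistant (synthetic)
print(youden_cutoff(wc, ir))  # → (97.0, 1.0, 1.0)
```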
Procedia PDF Downloads 320
852 Sensitivity Enhancement in Graphene Based Surface Plasmon Resonance (SPR) Biosensor
Authors: Angad S. Kushwaha, Rajeev Kumar, Monika Srivastava, S. K. Srivastava
Abstract:
A lot of research is going on in the field of graphene-based SPR biosensors. In the conventional SPR-based biosensor, graphene is used as a biomolecular recognition element; it adsorbs biomolecules via its sp2-hybridized, carbon-based ring structure. The proposed SPR-based biosensor configuration will open a new avenue for efficient biosensing by taking advantage of graphene and its fascinating nanofabrication properties. In the present study, we have studied an SPR biosensor based on graphene mediated by zinc oxide (ZnO) and gold. In the proposed structure, a prism (BK7) base is coated with zinc oxide, followed by gold and graphene. Using the waveguide approach with the transfer matrix method, the proposed structure has been investigated theoretically. We have analyzed the reflectance versus incidence angle curve using a He-Ne laser of wavelength 632.8 nm. The angle at which the reflectance is minimized is termed the SPR angle. The shift in SPR angle is responsible for biosensing. From the analysis of the reflectivity curve, we have found that there is a shift in the SPR angle as biomolecules get attached to the graphene surface. This graphene layer also enhances the sensitivity of the SPR sensor compared to the conventional sensor, and the sensitivity increases further with the number of graphene layers. In our proposed biosensor we have thus found the minimum possible reflectivity with an optimum level of sensitivity.
Keywords: biosensor, sensitivity, surface plasmon resonance, transfer matrix method
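A transfer-matrix calculation of the kind described can be sketched as follows. This is not the authors' code: the layer thicknesses and the complex refractive indices for ZnO, gold, and graphene at 632.8 nm are assumed textbook-style values, and the analyte is taken to be water.

```python
import numpy as np

def reflectance_p(n, d, theta, lam):
    """|r|^2 for p-polarised light through a layer stack: n = complex indices,
    d = thicknesses in metres (d[0], d[-1] ignored), theta = angle in the prism (rad)."""
    n = np.asarray(n, complex)
    kx2 = (n[0] * np.sin(theta)) ** 2
    kz = np.sqrt(n ** 2 - kx2)                     # normal wave-vector component / k0
    q = kz / n ** 2                                # p-polarisation admittance
    M = np.eye(2, dtype=complex)
    for k in range(1, len(n) - 1):                 # interior layers only
        beta = 2 * np.pi * d[k] / lam * kz[k]      # phase thickness
        Mk = np.array([[np.cos(beta), -1j * np.sin(beta) / q[k]],
                       [-1j * q[k] * np.sin(beta), np.cos(beta)]])
        M = M @ Mk
    num = q[0] * (M[0, 0] + M[0, 1] * q[-1]) - (M[1, 0] + M[1, 1] * q[-1])
    den = q[0] * (M[0, 0] + M[0, 1] * q[-1]) + (M[1, 0] + M[1, 1] * q[-1])
    return abs(num / den) ** 2

lam = 632.8e-9
# BK7 prism / ZnO / Au / graphene / water -- indices and thicknesses assumed
n = [1.515, 1.96, 0.18 + 3.0j, 3.0 + 1.149j, 1.33]
d = [0, 5e-9, 50e-9, 0.34e-9, 0]
angles = np.radians(np.linspace(40, 80, 401))
R = [reflectance_p(n, d, t, lam) for t in angles]
print("SPR dip near %.1f deg, R_min = %.3f" % (np.degrees(angles[np.argmin(R)]), min(R)))
```

Binding of biomolecules is modelled by changing the index of the final medium, which shifts the angle of the reflectance minimum.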
Procedia PDF Downloads 417
851 Experiments on Weakly-Supervised Learning on Imperfect Data
Authors: Yan Cheng, Yijun Shao, James Rudolph, Charlene R. Weir, Beth Sahlmann, Qing Zeng-Treitler
Abstract:
Supervised predictive models require labeled data for training purposes. Complete and accurate labeled data, i.e., a ‘gold standard’, is not always available, and imperfectly labeled data may need to serve as an alternative. An important question is whether the accuracy of the labeled data creates a performance ceiling for the trained model. In this study, we trained several models to recognize the presence of delirium in clinical documents using data with annotations that are not completely accurate (i.e., weakly-supervised learning). In the external evaluation, the support vector machine model with a linear kernel performed best, achieving an area under the curve of 89.3% and accuracy of 88%, surpassing the 80% accuracy of the training sample. We then generated a set of simulated data and carried out a series of experiments which demonstrated that models trained on imperfect data can (but do not always) outperform the accuracy of the training data, e.g., the area under the curve for some models is higher than 80% when trained on the data with an error rate of 40%. Our experiments also showed that the error resistance of linear modeling is associated with larger sample size, error type, and linearity of the data (all p-values < 0.001). In conclusion, this study sheds light on the usefulness of imperfect data in clinical research via weakly-supervised learning.
Keywords: weakly-supervised learning, support vector machine, prediction, delirium, simulation
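The simulation finding above — that a model trained on imperfect labels can beat the accuracy of its own training data — is easy to reproduce in miniature. This sketch is not the authors' simulation: it uses a plain logistic regression (rather than their SVM) on synthetic linearly separable data with a 40% symmetric label-flip rate.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4000
X = rng.normal(size=(n, 2))
y_true = (X[:, 0] + X[:, 1] > 0).astype(float)   # clean ("gold standard") labels
flip = rng.random(n) < 0.40                       # 40% symmetric label noise
y_noisy = np.where(flip, 1 - y_true, y_true)      # imperfect training labels

w = np.zeros(3)
Xb = np.c_[X, np.ones(n)]                         # add a bias column
for _ in range(2000):                             # gradient descent on logistic loss
    p = 1 / (1 + np.exp(-Xb @ w))
    w -= 0.1 * Xb.T @ (p - y_noisy) / n

acc_clean = ((Xb @ w > 0) == (y_true == 1)).mean()
print("accuracy on clean labels: %.3f" % acc_clean)
```

Because the label noise is symmetric, the optimal decision boundary is unchanged, so with enough samples the learned model far exceeds the 60% label accuracy it was trained on.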
Procedia PDF Downloads 199
850 Pharmacokinetic Study of Clarithromycin in Human Female of Pakistani Population
Authors: Atifa Mushtaq, Tanweer Khaliq, Hafiz Alam Sher, Asia Farid, Anila Kanwal, Maliha Sarfraz
Abstract:
The study was designed to assess the various pharmacokinetic parameters of a commercially available clarithromycin tablet (Klaricid® 250 mg, Abbott, Pakistan) in plasma samples of healthy adult female volunteers by applying a rapid, sensitive and accurate HPLC-UV analytical method. The human plasma samples were evaluated using an isocratic high-performance liquid chromatography (HPLC) system (Sykam) consisting of a pump, a C18 column (250×4.6 mm, 5 µm) and a UV detector. The mobile phase, comprising potassium dihydrogen phosphate (50 mM, pH 6.8, containing 0.7% triethylamine), methanol and acetonitrile (30:25:45, v/v/v), was delivered at a flow rate of 1 mL/min with an injection volume of 20 µL. Detection was performed at λmax 275 nm. By applying this method, the important pharmacokinetic parameters Cmax, Tmax, area under the curve (AUC), half-life (t1/2), volume of distribution (Vd) and clearance (Cl) were measured. The pharmacokinetic parameters of clarithromycin were calculated with (APO) pharmacological analysis software. The maximum plasma concentration Cmax was 2.78 ± 0.33 µg/mL, the time to reach maximum concentration tmax was 2.82 ± 0.11 h, and the area under the curve (AUC) was 20.14 h·µg/mL. The mean ± SD values obtained for the pharmacokinetic parameters showed a significant difference from those reported in the previous literature, which emphasizes the need for dose adjustment of clarithromycin in the Pakistani population.
Keywords: pharmacokinetics, clarithromycin, HPLC, Pakistan
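The parameters named above can be estimated non-compartmentally from a concentration-time profile. The sketch below uses made-up concentration data, not the study's measurements: Cmax and Tmax are read off directly, AUC comes from the trapezoidal rule, and the half-life from the terminal log-linear slope.

```python
import numpy as np

t = np.array([0.0, 0.5, 1.0, 2.0, 3.0, 4.0, 6.0, 8.0, 12.0])  # time, h (illustrative)
c = np.array([0.0, 0.9, 1.8, 2.8, 2.6, 2.1, 1.2, 0.7, 0.25])  # concentration, ug/mL

cmax = c.max()                                   # peak concentration
tmax = t[c.argmax()]                             # time of peak
auc = ((c[1:] + c[:-1]) / 2 * np.diff(t)).sum()  # AUC(0-12h) by trapezoidal rule
k_el = -np.polyfit(t[-4:], np.log(c[-4:]), 1)[0]  # terminal elimination rate constant
t_half = np.log(2) / k_el                        # elimination half-life
print(f"Cmax={cmax} Tmax={tmax} AUC={auc:.2f} t1/2={t_half:.2f} h")
```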
Procedia PDF Downloads 108
849 An Improvement of Elliptic Curve Cryptography over a Ring
Authors: Abdelhakim Chillali, Abdelhamid Tadmori, Muhammed Ziane
Abstract:
In this article we study the elliptic curve defined over the ring An and define the mathematical operations of ECC on it, which provides high security and an advantage for wireless applications compared to other asymmetric key cryptosystems.
Keywords: elliptic curves, finite ring, cryptography, study
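The abstract does not define the ring An, so as a hedged illustration the sketch below implements the basic operation any ECC scheme builds on: the group law on a Weierstrass curve y² = x³ + ax + b over a prime field F_p (the curve, prime, and base point are chosen for the example, not taken from the paper).

```python
def ec_add(P, Q, a, p):
    """Add points P, Q (tuples, or None for the point at infinity) on y^2 = x^3 + ax + b mod p."""
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                                          # P + (-P) = O
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p     # tangent slope
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p            # chord slope
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def ec_mul(k, P, a, p):
    """Scalar multiplication k*P by double-and-add."""
    R = None
    while k:
        if k & 1:
            R = ec_add(R, P, a, p)
        P = ec_add(P, P, a, p)
        k >>= 1
    return R

# Example curve y^2 = x^3 + 2x + 3 over F_97; (0, 10) lies on it since 100 ≡ 3 (mod 97)
print(ec_mul(5, (0, 10), 2, 97))  # → (88, 56)
```

In a cryptosystem the hardness assumption is that recovering k from P and k·P (the discrete logarithm) is infeasible at realistic field sizes.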
Procedia PDF Downloads 372
848 Tillage and Manure Effects on Water Retention and Van Genuchten Parameters in Western Iran
Authors: Azadeh Safadoust, Ali Akbar Mahboubi, Mohammad Reza Mosaddeghi, Bahram Gharabaghi
Abstract:
A study was conducted to evaluate the hydraulic properties of a sandy loam soil and corn (Zea mays L.) crop production in a short-term field experiment with tillage and manure combinations carried out in western Iran. Treatments included composted cattle manure application rates [0, 30, and 60 Mg (dry weight) ha⁻¹] and tillage systems [no-tillage (NT), chisel plowing (CP), and moldboard plowing (MP)] arranged in a split-plot design. The soil water characteristic curve (SWCC) and saturated hydraulic conductivity (Ks) were significantly affected by the manure and tillage treatments. At any matric suction, the soil water content was in the order MP > CP > NT. At all matric suctions, the amount of water retained by the soil increased as the manure application rate increased (i.e., 60 > 30 > 0 Mg ha⁻¹). As with the tillage effects, the differences in water retained due to manure addition were smaller at high suctions than at low suctions. The changes in the SWCC caused by tillage methods and manure applications may be attributed to changes in the pore size and aggregate size distributions. Soil Ks was in the order CP > MP > NT for the first two layers and in the order MP > CP and NT for the deeper soil layer. Ks also increased with increasing rates of manure application (i.e., 60 > 30 > 0 Mg ha⁻¹), due to the increase in total pore size and continuity.
Keywords: corn, manure, saturated hydraulic conductivity, soil water characteristic curve, tillage
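The van Genuchten parameters named in the title describe the SWCC through the retention function θ(h) = θr + (θs − θr) / [1 + (αh)ⁿ]^m with m = 1 − 1/n. The sketch below uses typical literature values for a sandy loam, not parameters fitted to this study's soil:

```python
import numpy as np

def van_genuchten(h, theta_r, theta_s, alpha, n):
    """Volumetric water content at matric suction h (van Genuchten, 1980); m = 1 - 1/n."""
    m = 1 - 1 / n
    return theta_r + (theta_s - theta_r) / (1 + (alpha * h) ** n) ** m

suctions = np.array([1, 10, 100, 1000, 10000])       # matric suction, cm of water
theta = van_genuchten(suctions, theta_r=0.065, theta_s=0.41,
                      alpha=0.075, n=1.89)           # typical sandy loam values (assumed)
for h, th in zip(suctions, theta):
    print(f"h = {h:>6} cm  ->  theta = {th:.3f}")
```

Fitting θr, θs, α and n separately for each tillage-manure treatment is what lets retention curves like those in the study be compared quantitatively.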
Procedia PDF Downloads 78
847 Destination Port Detection for Vessels: An Analytic Tool for Optimizing Port Authorities' Resources
Authors: Lubna Eljabu, Mohammad Etemad, Stan Matwin
Abstract:
Port authorities face many challenges in congested ports in allocating their resources to provide a safe and secure loading/unloading procedure for cargo vessels. Selecting a destination port is the decision of a vessel's master, based on many factors such as weather, wavelength and changes of priorities. Access to a tool that leverages AIS messages to monitor vessels' movements and accurately predict their next destination port promotes an effective resource allocation process for port authorities. In this research, we propose a method, namely Reference Route of Trajectory (RRoT), to assist port authorities in predicting inflow and outflow traffic in their local environment by monitoring Automatic Identification System (AIS) messages. Our RRoT method creates a reference route based on historical AIS messages and identifies the destination of a vessel from its recent movement using some of the best trajectory similarity measures. We evaluated five similarity measures: Discrete Fréchet Distance (DFD), Dynamic Time Warping (DTW), Partial Curve Mapping (PCM), Area between two curves (Area) and Curve Length (CL). Our experiments show that our method identifies the destination port with an accuracy of 98.97% and an F-measure of 99.08% using the Dynamic Time Warping (DTW) similarity measure.
Keywords: spatial temporal data mining, trajectory mining, trajectory similarity, resource optimization
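The DTW measure that performed best above can be sketched in a few lines. This is not the authors' RRoT implementation, just the classic dynamic-programming recurrence applied to (x, y) position sequences, where warping lets a live track match a reference route even when the two are sampled at different rates:

```python
import numpy as np

def dtw(a, b):
    """Dynamic Time Warping distance between two 2-D point sequences."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])           # pointwise distance
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

route = [(0, 0), (1, 0), (2, 0), (3, 1)]           # reference route (illustrative)
track = [(0, 0), (1, 0), (1, 0), (2, 0), (3, 1)]   # vessel track with a repeated fix
print(dtw(route, track))  # → 0.0
```

A vessel's destination is then predicted as the reference route with the smallest DTW distance to its recent track.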
Procedia PDF Downloads 121
846 Dynamic Effects of Energy Consumption, Economic Growth, International Trade and Urbanization on Environmental Degradation in Nigeria
Authors: Abdulkarim Yusuf
Abstract:
Motivation: A crucial but difficult goal for governments and policymakers in Nigeria in recent years has been the sustainability of economic growth. This goal must be accomplished by regulating or lowering greenhouse gas emissions, which calls for switching to a low- or zero-carbon production system. The lack of in-depth empirical studies on the environmental impact of socioeconomic variables in Nigeria, together with a number of unresolved issues from earlier research, led to the current study. Objective: This study fills an important empirical gap by investigating the existence of an Environmental Kuznets Curve hypothesis and the long and short-run dynamic impact of socioeconomic variables on ecological sustainability in Nigeria. Data and method: Annual time series data covering the period 1980 to 2020 and the Autoregressive Distributed Lag technique in the presence of structural breaks were adopted for this study. Results: The empirical findings support the existence of the environmental Kuznets curve hypothesis for Nigeria in the long and short run. Energy consumption and total import exacerbate environmental deterioration in the long and short run, whereas total export improves environmental quality in the long and short run. Financial development, which contributed to a conspicuous decrease in the level of environmental destruction in the long run, escalated it in the short run. In contrast, urbanization caused a significant increase in environmental damage in the long run but motivated a decrease in biodiversity loss in the short run. Implications: The government, policymakers, and all energy stakeholders should take additional measures to ensure the implementation and diversification of energy sources to accommodate more renewable energy sources that emit less carbon in order to promote efficiency in Nigeria's production processes and lower carbon emissions.
In order to promote the production and trade of environmentally friendly goods, they should also revise and strengthen environmental policies. With affordable, dependable, and sustainable energy use for higher productivity and inclusive growth, Nigeria will be able to achieve its long-term development goals of good health and wellbeing.
Keywords: economic growth, energy consumption, environmental degradation, environmental Kuznets curve, urbanization, Nigeria
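The Environmental Kuznets Curve test above reduces to checking the signs of a quadratic income term: the hypothesis is supported when emissions regressed on income and income squared give a positive linear and a negative quadratic coefficient (an inverted U). A toy check on synthetic data, not the Nigerian series or the study's ARDL specification:

```python
import numpy as np

rng = np.random.default_rng(3)
income = np.linspace(1, 10, 41)                                   # synthetic income series
emissions = 2 + 1.8 * income - 0.12 * income ** 2 + rng.normal(0, 0.3, 41)

b2, b1, b0 = np.polyfit(income, emissions, 2)                     # quadratic EKC regression
turning_point = -b1 / (2 * b2)                                    # income where the U inverts
print("inverted U: %s, turning point at income %.1f" % (b2 < 0 and b1 > 0, turning_point))
```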
Procedia PDF Downloads 54
845 The Relationship between Human Neutrophil Elastase Levels and Acute Respiratory Distress Syndrome in Patients with Thoracic Trauma
Authors: Wahyu Purnama Putra, Artono Isharanto
Abstract:
Thoracic trauma is trauma that strikes the thoracic wall or intrathoracic organs, whether blunt or penetrating. Thoracic trauma often causes impaired ventilation-perfusion due to damage to the lung parenchyma. This results in impaired tissue oxygenation, which is one of the causes of acute respiratory distress syndrome (ARDS). These changes are caused by the release of pro-inflammatory mediators, plasmatic proteins, and proteases into the alveolar space, associated with ongoing edema, as well as oxidative products that ultimately result in severe inhibition of the surfactant system. This study aims to predict the incidence of acute respiratory distress syndrome (ARDS) from human neutrophil elastase levels by examining the relationship between plasma elastase levels and the incidence of ARDS in thoracic trauma patients in Malang. This is an observational cohort study. Data analysis used the Pearson correlation test and the ROC curve (receiver operating characteristic curve). A significant (p = 0.000, r = -0.988) relationship was found between elastase levels and BGA-3. Patients with elastase levels around 23.79 ± 3.95 went on to experience mild ARDS, those with levels around 57.68 ± 18.55 moderate ARDS, and those with levels around 107.85 ± 5.04 severe ARDS. Neutrophil elastase levels correlate with the severity of ARDS.
Keywords: ARDS, human neutrophil elastase, severity, thoracic trauma
Procedia PDF Downloads 148
844 Performance Assessment of Multi-Level Ensemble for Multi-Class Problems
Authors: Rodolfo Lorbieski, Silvia Modesto Nassar
Abstract:
Many supervised machine learning tasks require decision making across numerous different classes. Multi-class classification has several applications, such as face recognition, text recognition and medical diagnostics. The objective of this article is to analyze an adapted method of Stacking for multi-class problems, which combines ensembles within the ensemble itself. For this purpose, a training scheme similar to Stacking was used, but with three levels, where the final decision-maker (level 2) performs its training by combining outputs from the tree-based pair of meta-classifiers (level 1) from Bayesian families. These are in turn trained by pairs of base classifiers (level 0) of the same family. This strategy seeks to promote diversity among the ensembles forming the level-2 meta-classifier. Three performance measures were used: (1) accuracy, (2) area under the ROC curve, and (3) time, for three factors: (a) datasets, (b) experiments and (c) levels. To compare the factors, a three-way ANOVA test was executed for each performance measure, considering 5 datasets by 25 experiments by 3 levels. A triple interaction between factors was observed only in time. Accuracy and area under the ROC curve presented similar results, showing a double interaction between level and experiment, as well as for the dataset factor. It was concluded that level 2 had an average performance above the other levels and that the proposed method is especially efficient for multi-class problems when compared to binary problems.
Keywords: stacking, multi-layers, ensemble, multi-class
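The core Stacking idea — feed level-0 learners' outputs to a higher-level learner — can be sketched in miniature. This is a simplified two-level toy, not the paper's three-level tree/Bayesian design: the base learners are a nearest-centroid scorer and a logistic regression, their in-sample scores form the meta-features, and another logistic regression is the meta-learner.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(600, 2))
y = (X[:, 0] ** 2 + X[:, 1] > 1).astype(int)         # nonlinear synthetic problem

def centroid_clf(Xtr, ytr, Xte):
    """Base learner 1: nearest-centroid score in [0, 1] (higher = class 1)."""
    c0, c1 = Xtr[ytr == 0].mean(0), Xtr[ytr == 1].mean(0)
    d0 = np.linalg.norm(Xte - c0, axis=1)
    d1 = np.linalg.norm(Xte - c1, axis=1)
    return d0 / (d0 + d1)

def logistic_clf(Xtr, ytr, Xte, iters=500, lr=0.5):
    """Base learner 2 / meta-learner: logistic regression probability."""
    Xb, Xq = np.c_[Xtr, np.ones(len(Xtr))], np.c_[Xte, np.ones(len(Xte))]
    w = np.zeros(Xb.shape[1])
    for _ in range(iters):                            # gradient descent on logistic loss
        w -= lr * Xb.T @ (1 / (1 + np.exp(-Xb @ w)) - ytr) / len(ytr)
    return 1 / (1 + np.exp(-Xq @ w))

tr, te = np.arange(400), np.arange(400, 600)
meta_tr = np.c_[centroid_clf(X[tr], y[tr], X[tr]),    # level-0 outputs as meta-features
                logistic_clf(X[tr], y[tr], X[tr])]
meta_te = np.c_[centroid_clf(X[tr], y[tr], X[te]),
                logistic_clf(X[tr], y[tr], X[te])]
pred = logistic_clf(meta_tr, y[tr], meta_te) > 0.5    # level-1 decision
print("stacked accuracy: %.3f" % (pred == (y[te] == 1)).mean())
```

A production version would build the meta-features from out-of-fold predictions rather than in-sample scores, to avoid leaking training labels into the meta-level.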
Procedia PDF Downloads 269
843 Vehicle Maneuverability on Horizontal Curves on Hilly Terrain: A Study on Shillong Highway
Authors: Surendra Choudhary, Sapan Tiwari
Abstract:
The driver has two fundamental duties: i) controlling the vehicle's position along the longitudinal and lateral directions of movement, and ii) keeping the vehicle within the roadway width. These duties are interdependent and are jointly referred to as two-dimensional driver behavior. One of the main problems in driver behavior modeling is identifying the parameters that describe typical driving conduct and vehicle maneuvers under distinct traffic circumstances. To date, there is still no well-accepted theory that can comprehensively model 2-D (longitudinal and lateral) driver conduct. The primary objective of this research is to explore the lateral and longitudinal behavior of vehicles in heterogeneous traffic conditions on horizontal curves, as well as the effect of road geometry on dynamic traffic parameters, i.e., vehicle velocity and lateral placement. In this research, a thorough assessment of dynamic vehicle parameters, i.e., speed, lateral acceleration, and turn radius, together with horizontal curve road parameters, i.e., curvature radius and pavement friction, is performed, along with their interrelationships. The dynamic parameters of various types of car drivers are gathered using a high-precision GPS-based VBOX tool. The connection between dynamic vehicle parameters and curve geometry is established after removal of noise from the GPS trajectories. The major finding of the research is that vehicles maneuver at speeds, accelerations, and lateral deviations beyond the design limits on the studied curves of the highway, which can become lethal if the weather changes from dry to wet.
Keywords: geometry, maneuverability, terrain, trajectory, VBOX
Procedia PDF Downloads 143
842 A Novel Approach of NPSO on Flexible Logistic (S-Shaped) Model for Software Reliability Prediction
Authors: Pooja Rani, G. S. Mahapatra, S. K. Pandey
Abstract:
In this paper, we propose a novel approach combining neural network and particle swarm optimization methods for software reliability prediction. We first explain how to apply a compound function in a neural network so that we can derive a Flexible Logistic (S-shaped) Growth Curve (FLGC) model. This model mathematically represents software failure as a random process and can be used to evaluate software development status during testing. To avoid trapping in local minima, we apply the particle swarm optimization method to train the proposed model using failure test data sets. We derive the proposed model using computational-intelligence-based modeling; thus, the proposed model becomes a Neuro-Particle Swarm Optimization (NPSO) model. We test results with different inertia weights for the particle position and velocity updates, and select the best inertia weight, together with a personal-best-oriented PSO (pPSO) that helps to choose the local best in a network neighborhood. The applicability of the proposed model is demonstrated through a real failure test data set. The results obtained from the experiments show that the proposed model has a fairly accurate prediction capability for software reliability.
Keywords: software reliability, flexible logistic growth curve model, software cumulative failure prediction, neural network, particle swarm optimization
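The velocity update with an inertia weight that the abstract refers to is the heart of PSO. The bare-bones sketch below is not the paper's NPSO: it minimises a simple quadratic instead of a network's training loss, with generic parameter choices (w = 0.7, c1 = c2 = 1.5).

```python
import numpy as np

def pso(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimise f over R^dim with a basic inertia-weight particle swarm."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))          # particle positions
    v = np.zeros_like(x)                                # particle velocities
    pbest, pbest_val = x.copy(), np.apply_along_axis(f, 1, x)
    g = pbest[pbest_val.argmin()].copy()                # global best position
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)   # inertia + cognitive + social
        x = x + v
        val = np.apply_along_axis(f, 1, x)
        better = val < pbest_val
        pbest[better], pbest_val[better] = x[better], val[better]
        g = pbest[pbest_val.argmin()].copy()
    return g, f(g)

best, best_val = pso(lambda z: ((z - 1.0) ** 2).sum(), dim=3)
print(best, best_val)   # converges near the minimiser [1, 1, 1]
```

In an NPSO-style setup, the vector being optimised would hold the network's weights and f would be the prediction error on the failure data.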
Procedia PDF Downloads 344
841 Critical Study on the Sensitivity of Corrosion Fatigue Crack Growth Rate to Cyclic Waveform and Microstructure in Marine Steel
Authors: V. C. Igwemezie, A. N. Mehmanparast
Abstract:
The primary focus of this work is to understand how variations in microstructure and cyclic waveform affect corrosion fatigue crack growth (CFCG) in steel, especially in the Paris region of the da/dN vs. ΔK curve. This work is important because it provides fundamental information for the modelling, design, selection, and use of steels for various engineering applications in the marine environment. The authors' corrosion fatigue test data on normalized and thermomechanical control process (TMCP) ferritic-pearlitic steels were compared with several studies on different microstructures in the literature. The microstructures of these steels are radically different, and general comparative studies of the effect of microstructure on fatigue crack growth resistance in these materials are very scarce and, where available, limited to a few cases. For purposes of engineering application, the results in this study show little dependency of the fatigue crack growth rate (FCGR) on yield strength, tensile strength, ductility, frequency, and stress ratio in the range 0.1-0.7. The nature of the steel microstructure appears to be a major factor in determining the rate at which fatigue cracks propagate over the entire sigmoidal da/dN vs. ΔK curve. The study also shows that the sine wave is the most damaging fatigue waveform for ferritic-pearlitic steels, which suggests that testing under a sine waveform would be a conservative approach for the design of engineering structures, regardless of the service waveform.
Keywords: BS7910, corrosion-fatigue crack growth rate, cyclic waveform, microstructure, steel
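In the Paris region discussed above, the growth rate follows da/dN = C·(ΔK)^m. A hedged numerical sketch (C, m, stress range, and geometry factor are generic illustrative values, not fitted to these steels) integrates this law to estimate the cycles needed to grow a crack between two lengths:

```python
import numpy as np

C, m = 1e-11, 3.0            # Paris constants (illustrative; da/dN in m/cycle, dK in MPa*sqrt(m))
delta_sigma = 80e6           # constant-amplitude stress range, Pa (assumed)
Y = 1.12                     # geometry factor, edge crack, taken constant (assumed)

def cycles(a0, af, steps=100_000):
    """N = integral of da / (C * dK^m) from crack length a0 to af, trapezoidal rule."""
    a = np.linspace(a0, af, steps)
    dK = Y * delta_sigma * np.sqrt(np.pi * a)        # stress intensity factor range, Pa*sqrt(m)
    dadN = C * (dK / 1e6) ** m                       # growth per cycle (dK converted to MPa*sqrt(m))
    g = 1.0 / dadN
    return np.sum((g[1:] + g[:-1]) / 2 * np.diff(a))

print("cycles from 1 mm to 10 mm: %.2e" % cycles(1e-3, 10e-3))
```

Because the integrand is dominated by the smallest crack lengths, the predicted life is most sensitive to the initial flaw size — one reason microstructure effects in the near-threshold and Paris regions matter so much for marine structures.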
Procedia PDF Downloads 155
840 Developing Pavement Structural Deterioration Curves
Authors: Gregory Kelly, Gary Chai, Sittampalam Manoharan, Deborah Delaney
Abstract:
A Structural Number (SN) can be calculated for a road pavement from the properties and thicknesses of the surface, base course, sub-base, and subgrade. Historically, the cost of collecting structural data has been very high. Data were initially collected using Benkelman Beams and are now collected by Falling Weight Deflectometer (FWD). The structural strength of pavements weakens over time due to environmental and traffic loading factors, but due to a lack of data, no structural deterioration curve for pavements has been implemented in a Pavement Management System (PMS). The International Roughness Index (IRI) is a measure of the road longitudinal profile and has been used as a proxy for a pavement's structural integrity. This paper offers two conceptual methods to develop Pavement Structural Deterioration Curves (PSDC). In the first, structural data are grouped in sets by design Equivalent Standard Axles (ESA). An Initial SN (ISN), Intermediate SNs (SNI) and a Terminal SN (TSN) are used to develop the curves. Using FWD data, the ISN is the SN after the pavement is rehabilitated (Financial Accounting 'Modern Equivalent'). Intermediate SNIs are SNs other than the ISN and TSN. The TSN is defined as the SN of the pavement when it was approved for pavement rehabilitation. The second method is to use Traffic Speed Deflectometer (TSD) data. The road network, already divided into road blocks, is grouped by traffic loading. For each traffic loading group, road blocks that have had a recent pavement rehabilitation are used to calculate the ISN, and those planned for pavement rehabilitation to calculate the TSN. The remaining SNs are used to complete the age-based or, if available, historical traffic-loading-based SNIs.
Keywords: conceptual, pavement structural number, pavement structural deterioration curve, pavement management system
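The first method above amounts to fitting a curve from the ISN through the intermediate SNs to the TSN within each traffic-loading group. A conceptual sketch with invented values (the SNs, ESA figures, and the quadratic form are illustrative assumptions, not the paper's data or model):

```python
import numpy as np

esa = np.array([0.0, 0.5, 1.0, 2.0, 3.5, 5.0])   # cumulative traffic, millions of ESA
sn = np.array([5.2, 4.9, 4.7, 4.3, 3.8, 3.4])    # ISN ... intermediate SNs ... TSN

coeffs = np.polyfit(esa, sn, 2)                   # quadratic deterioration fit
predict_sn = np.poly1d(coeffs)
print("SN after 4 million ESA: %.2f" % predict_sn(4.0))
```

In a PMS, a curve like this would flag road blocks approaching their TSN as candidates for rehabilitation before condition surveys confirm it.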
Procedia PDF Downloads 544
839 Optimization of Dez Dam Reservoir Operation Using Genetic Algorithm
Authors: Alireza Nikbakht Shahbazi, Emadeddin Shirali
Abstract:
Since optimization problems in water resources are complicated by the variety of decision-making criteria and objective functions, it is sometimes impossible to resolve them through regular optimization methods, or doing so is time- or money-consuming. Therefore, the use of modern tools and methods is inevitable in resolving such problems. An accurate and sound utilization policy has to be determined in order to use natural resources such as water reservoirs optimally. Water reservoir programming studies aim to determine the final cultivated land area based on predefined agricultural models and water requirements; a dam utilization rule curve is also provided in such studies. The basic information applied in water reservoir programming studies generally includes meteorological, hydrological, agricultural and water reservoir related data, and the geometric characteristics of the reservoir. The Dez dam water resource system was simulated using this basic information in order to determine the capability of its reservoir to meet the objectives of the proposed plan. As a metaheuristic method, a genetic algorithm was applied in order to derive utilization rule curves (intersecting the reservoir volume). MATLAB software was used to solve the resulting model. Rule curves were first obtained through the genetic algorithm. Then the significance of using rule curves, and the decrease in the number of decision variables in the system, was determined through system simulation and comparison of the results with the optimization results (Standard Operating Procedure). One of the most essential issues in the optimization of a complicated water resource system is the increasing number of variables; a lot of time is therefore required to find an optimum answer, and in some cases no desirable result is obtained. In this research, intersecting the reservoir volume has been applied as a modern model to reduce the number of variables.
Water reservoir programming studies have been performed based on the basic information, general hypotheses, and standards, applying a monthly simulation technique over a statistical period of 30 years. Results indicated that application of the rule curve prevents extreme shortages and decreases the monthly shortages.
Keywords: optimization, rule curve, genetic algorithm method, Dez dam reservoir
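The rule-curve optimization described above can be illustrated with a toy genetic algorithm over twelve monthly storage targets. This is a minimal sketch, not the authors' Dez dam model: the hedging rule, squared-shortage penalty, capacity, demand, and GA settings are all illustrative assumptions.

```python
import random

def simulate(rule, inflows, capacity=100.0, demand=30.0):
    """Simulate monthly reservoir operation under a 12-value rule curve.

    rule[m] is the month-m storage level below which releases are hedged
    (cut back in proportion to storage). Returns the sum of squared
    monthly shortages over the whole inflow record.
    """
    storage, shortage = capacity / 2.0, 0.0
    for t, inflow in enumerate(inflows):
        target = rule[t % 12]
        desired = demand if storage >= target else demand * storage / max(target, 1e-9)
        release = min(desired, storage + inflow)             # cannot release more than available
        storage = min(storage + inflow - release, capacity)  # excess above capacity spills
        shortage += (demand - release) ** 2
    return shortage

def genetic_algorithm(inflows, pop_size=30, generations=60, seed=1):
    """Elitist GA over candidate rule curves (12 genes, one per month)."""
    rng = random.Random(seed)
    pop = [[rng.uniform(0.0, 100.0) for _ in range(12)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda r: simulate(r, inflows))
        parents = pop[: pop_size // 2]             # truncation selection keeps the best half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, 12)             # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.3:                 # random-reset mutation
                child[rng.randrange(12)] = rng.uniform(0.0, 100.0)
            children.append(child)
        pop = parents + children
    return min(pop, key=lambda r: simulate(r, inflows))
```

Because the best half of each generation is carried over unchanged, the returned rule curve is never worse than the best member of the random initial population.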
Procedia PDF Downloads 265
838 Thermoluminescence Investigations of Tl2Ga2Se3S Layered Single Crystals
Authors: Serdar Delice, Mehmet Isik, Nizami Hasanli, Kadir Goksen
Abstract:
Researchers have devoted great interest to ternary and quaternary semiconductor compounds, especially with the advancement of optoelectronic technology. The quaternary compound Tl2Ga2Se3S, grown by the Bridgman method, carries the properties of the layered ternary thallium chalcogenide group of semiconductors. This compound can be formed from TlGaSe2 crystals by replacing one quarter of the selenium atoms with sulfur atoms. Although Tl2Ga2Se3S crystals are not intentionally doped, some unintended defect types such as point defects, dislocations, and stacking faults can occur during the growth process. These defects can cause undesirable problems in semiconductor materials, especially those produced for optoelectronic technology. Defects of various types in semiconductor devices like LEDs and field-effect transistors may act as non-radiative or scattering centers in electron transport. Also, quick recombination of holes with electrons without any energy transfer between charge carriers can occur due to the existence of defects. Therefore, the characterization of defects may help researchers working in this field to produce high-quality devices. Thermoluminescence (TL) is an effective experimental method to determine the kinetic parameters of trap centers due to defects in crystals. In this method, the sample is illuminated at low temperature by light whose energy is greater than the band gap of the studied sample. Thus, charge carriers in the valence band are excited to delocalized bands. The charge carriers excited into the conduction band are then trapped. The trapped charge carriers are released by heating the sample gradually, and these carriers then recombine with the opposite carriers at recombination centers. In this way, luminescence is emitted from the sample. The emitted luminescence is converted to pulses by an experimental setup controlled by a computer program, and the TL spectrum is obtained. 
Defect characterization of Tl2Ga2Se3S single crystals has been performed by TL measurements at low temperatures between 10 and 300 K with various heating rates ranging from 0.6 to 1.0 K/s. The TL signal due to the luminescence from trap centers revealed one glow peak with a maximum at 36 K. Curve-fitting and various-heating-rates methods were used for the analysis of the glow curve. An activation energy of 13 meV was found by applying the curve-fitting method. This practical method also established that the trap center exhibits the characteristics of mixed (general) kinetic order. In addition, the various-heating-rates analysis gave a result (13 meV) compatible with curve fitting once the temperature lag effect was taken into consideration. Since the studied crystals were not intentionally doped, these centers are thought to originate from stacking faults, which are quite possible in Tl2Ga2Se3S due to the weakness of the van der Waals forces between the layers. The distribution of traps was also investigated using an experimental method, and a quasi-continuous distribution was attributed to the determined trap centers.
Keywords: chalcogenides, defects, thermoluminescence, trap centers
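The various-heating-rates method mentioned above can be illustrated on a simulated first-order (Randall-Wilkins) glow peak: the peak temperature shifts with heating rate, and two peak positions suffice to estimate the activation energy. The trap parameters below (E = 0.5 eV, s = 1e10 s^-1) are illustrative assumptions, not the 13 meV center reported for Tl2Ga2Se3S; only the heating rates 0.6 and 1.0 K/s echo the experimental range.

```python
import math

K_B = 8.617e-5  # Boltzmann constant, eV/K

def glow_curve(energy, s, beta, t_start=100.0, t_end=350.0, dt=0.02):
    """First-order (Randall-Wilkins) TL glow curve at heating rate beta (K/s).
    Returns (temperatures, intensities)."""
    temps, intens = [], []
    integral = 0.0
    prev = math.exp(-energy / (K_B * t_start))
    t = t_start
    while t <= t_end:
        cur = math.exp(-energy / (K_B * t))
        integral += 0.5 * (prev + cur) * dt      # trapezoidal integration
        intens.append(s * cur * math.exp(-(s / beta) * integral))
        temps.append(t)
        prev = cur
        t += dt
    return temps, intens

def peak_temperature(temps, intens):
    """Temperature of the glow-curve maximum."""
    return temps[max(range(len(intens)), key=intens.__getitem__)]

def activation_energy_two_rates(tm1, beta1, tm2, beta2):
    """Two-heating-rates estimate of the trap activation energy (eV),
    from the first-order peak condition beta*E/(k*Tm^2) = s*exp(-E/(k*Tm))."""
    return (K_B * tm1 * tm2 / (tm1 - tm2)
            * math.log((beta1 / beta2) * (tm2 / tm1) ** 2))
```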
Procedia PDF Downloads 282
837 Analysis of Accurate Direct-Estimation of the Maximum Power Point and Thermal Characteristics of High Concentration Photovoltaic Modules
Authors: Yan-Wen Wang, Chu-Yang Chou, Jen-Cheng Wang, Min-Sheng Liao, Hsuan-Hsiang Hsu, Cheng-Ying Chou, Chen-Kang Huang, Kun-Chang Kuo, Joe-Air Jiang
Abstract:
Performance-related parameters of high concentration photovoltaic (HCPV) modules (e.g. current and voltage) are required when estimating the maximum power point using numerical and approximation methods. The maximum power point on the characteristic curve of a photovoltaic module varies with temperature and solar radiation. It is also difficult to estimate the output performance and maximum power point (MPP) due to the special characteristics of HCPV modules. Based on p-n junction semiconductor theory, a new and simple method is presented in this study to directly evaluate the MPP of HCPV modules. The MPP of HCPV modules can be determined from an irradiated I-V characteristic curve, because there is a non-linear relationship between the temperature of a solar cell and solar radiation. Numerical simulations and field tests are conducted to examine the characteristics of HCPV modules during maximum output power tracking. The performance of the presented method is evaluated by examining the dependence of temperature and irradiation intensity on the MPP characteristics of HCPV modules. These results show that the presented method allows HCPV modules to achieve their maximum power and perform power tracking under various operating conditions. A 0.1% error is found between the estimated and the real maximum power point.
Keywords: energy performance, high concentrated photovoltaic, maximum power point, p-n junction semiconductor
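Determining the MPP from an irradiated I-V characteristic can be sketched with an ideal single-diode junction model and a simple scan of the power curve. This is a simplification of the paper's direct-estimation method; the diode parameters (photocurrent, saturation current, ideality factor) are invented, and series/shunt resistances are neglected.

```python
import math

def iv_current(v, i_ph, i_0=1e-9, n=1.5, v_t=0.02585):
    """Ideal single-diode cell model: I = Iph - I0*(exp(V/(n*Vt)) - 1).
    i_ph: photocurrent (A); i_0: saturation current (A); n: ideality factor;
    v_t: thermal voltage (V) at ~300 K."""
    return i_ph - i_0 * (math.exp(v / (n * v_t)) - 1.0)

def maximum_power_point(i_ph, v_max=1.0, steps=10000):
    """Scan the irradiated I-V curve and return (v_mpp, i_mpp, p_mpp)."""
    best = (0.0, 0.0, 0.0)
    for k in range(steps + 1):
        v = v_max * k / steps
        i = iv_current(v, i_ph)
        if i <= 0.0:          # past open-circuit voltage
            break
        p = v * i
        if p > best[2]:
            best = (v, i, p)
    return best
```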
Procedia PDF Downloads 584
836 The Neutrophil-to-Lymphocyte Ratio after Surgery for Hip Fracture is a New, Simple, and Objective Score to Predict Postoperative Mortality
Authors: Philippe Dillien, Patrice Forget, Harald Engel, Olivier Cornu, Marc De Kock, Jean Cyr Yombi
Abstract:
Introduction: Hip fracture commonly precedes death in elderly people. Identification of high-risk patients may help target patients in whom optimal management, resource allocation, and trial efficiency are needed. The aim of this study is to construct a predictive score of mortality after hip fracture on the basis of the objective prognostic factors available: neutrophil-to-lymphocyte ratio (NLR), age, and sex. C-reactive protein (CRP) is also considered as an alternative to the NLR. Patients and methods: After IRB approval, we analyzed our prospective database including 286 consecutive patients with hip fracture. A score was constructed combining age (1 point per decade above 74 years), sex (1 point for males), and NLR at postoperative day +5 (1 point if >5). A receiver operating characteristic (ROC) curve analysis was performed. Results: Of the 286 patients included, 235 were analyzed (72 males and 163 females, 30.6%/69.4%), with a median age of 84 (range: 65 to 102) years and mean NLR values of 6.47+/-6.07. At one year, 82/280 patients died (29.3%). Graphical analysis and the log-rank test confirm a highly statistically significant difference (P<0.001). Performance analysis shows an AUC of 0.72 [95% CI 0.65-0.79]. CRP shows no advantage over NLR. Conclusion: We have developed a score based on age, sex, and the NLR to predict the risk of mortality at one year in elderly patients after surgery for a hip fracture. After external validation, it may be used in clinical practice as well as in clinical research to stratify the risk of postoperative mortality.
Keywords: neutrophil-to-lymphocyte ratio, hip fracture, postoperative mortality, medical and health sciences
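The score construction and its ROC evaluation can be sketched directly from the definition above (1 point per decade above 74 years, 1 for male sex, 1 for day-5 NLR > 5). The AUC helper uses the rank (Mann-Whitney) formulation with tie correction; the example patients in the test are invented, not study data.

```python
def mortality_score(age, male, nlr_day5):
    """Score from the abstract: 1 point per decade above 74 years,
    1 point for male sex, 1 point if postoperative day-5 NLR > 5."""
    points = max(0, (age - 65) // 10)   # ages 75-84 -> 1, 85-94 -> 2, ...
    points += 1 if male else 0
    points += 1 if nlr_day5 > 5 else 0
    return points

def roc_auc(scores, outcomes):
    """AUC via the rank (Mann-Whitney U) formulation: the probability that a
    randomly chosen positive case outscores a random negative (ties count 0.5)."""
    pos = [s for s, died in zip(scores, outcomes) if died]
    neg = [s for s, died in zip(scores, outcomes) if not died]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```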
Procedia PDF Downloads 412
835 Family of Density Curves of Queensland Soils from Compaction Tests, on a 3D Z-Plane Function of Moisture Content, Saturation, and Air-Void Ratio
Authors: Habib Alehossein, M. S. K. Fernando
Abstract:
Soil density depends on the volume of the voids and the proportion of water and air in the voids. However, there is a limit to the contraction of the voids at any given compaction energy, whereby additional water is used to reduce the void volume further by lubricating the particles' frictional contacts. Hence, at an optimum moisture content and a specific compaction energy, the density of unsaturated soil can be maximized where the void volume is minimum. However, when considering a full compaction curve and the permutations and variations of all these components (soil, air, water, and energy), laboratory soil compaction tests can become expensive, time-consuming, and exhausting. Therefore, analytical methods built on a few test data points can be developed and used to reduce such unnecessary effort significantly. Concentrating on the compaction testing results, this study discusses the analytical modelling method developed for some fine-grained and coarse-grained soils of Queensland. Soil properties and characteristics, such as full functional compaction curves under various compaction energy conditions, were studied and developed for a few soil types. Using MATLAB, several generic analytical codes were created for this study, covering all possible compaction parameters and results as they occur in a soil mechanics lab. These MATLAB codes produce a family of curves to determine the relationships between density, moisture content, void ratio, saturation, and compaction energy.
Keywords: analytical, MATLAB, modelling, compaction curve, void ratio, saturation, moisture content
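The constant-saturation and constant-air-voids members of such a family follow from standard soil phase relationships. A Python sketch of the kind of curves the study's MATLAB codes generate (the specific gravity Gs = 2.65 is an assumed value, not a Queensland soil measurement):

```python
def dry_density_at_saturation(w, sat, gs=2.65, rho_w=1000.0):
    """Dry density (kg/m^3) on a constant-saturation curve:
    rho_d = Gs*rho_w / (1 + w*Gs/S).  S = 1 gives the zero-air-voids line."""
    return gs * rho_w / (1.0 + w * gs / sat)

def dry_density_at_air_voids(w, air_voids, gs=2.65, rho_w=1000.0):
    """Dry density on a constant air-void-content curve:
    rho_d = (1 - Av)*Gs*rho_w / (1 + w*Gs)."""
    return (1.0 - air_voids) * gs * rho_w / (1.0 + w * gs)

def family_of_curves(moistures, saturations, gs=2.65):
    """Map each saturation level to its dry-density curve over the
    given moisture contents (the 'family of curves' of the title)."""
    return {s: [dry_density_at_saturation(w, s, gs) for w in moistures]
            for s in saturations}
```

At S = 1 the constant-saturation curve coincides with the zero-air-voids (Av = 0) curve, which is a convenient internal consistency check.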
Procedia PDF Downloads 90
834 Epidemiological and Clinical Characteristics of Five Rare Pathological Subtypes of Hepatocellular Carcinoma
Authors: Xiaoyuan Chen
Abstract:
Background: This study aimed to characterize the epidemiological and clinical features of five rare subtypes of hepatocellular carcinoma (HCC) and to create a competing risk nomogram for predicting cancer-specific survival. Methods: This study used the Surveillance, Epidemiology, and End Results database to analyze the clinicopathological data of 50,218 patients with classic HCC and five rare subtypes (ICD-O-3 Histology Code=8170/3-8175/3) between 2004 and 2018. The annual percent change (APC) was calculated using Joinpoint regression, and a nomogram was developed based on multivariable competing risk survival analyses. The prognostic performance of the nomogram was evaluated using the Akaike information criterion, Bayesian information criterion, C-index, calibration curve, and area under the receiver operating characteristic curve. Decision curve analysis was used to assess the clinical value of the models. Results: The incidence of scirrhous carcinoma showed a decreasing trend (APC=-6.8%, P=0.025), while the morbidity of the other rare subtypes remained stable from 2004 to 2018. Incidence-based mortality plateaued in all subtypes during this period. Clear cell carcinoma was the most common subtype (n=551, 1.1%), followed by fibrolamellar (n=241, 0.5%), scirrhous (n=82, 0.2%), spindle cell (n=61, 0.1%), and pleomorphic (n=17, ~0%) carcinomas. Patients with fibrolamellar carcinoma were younger and more likely to have non-cirrhotic livers and better prognoses. Scirrhous carcinoma shared almost the same macro clinical characteristics and outcomes as classic HCC. Clear cell carcinoma tended to occur in elderly male patients in the Asia-Pacific region, and more than half of the cases were large HCC (size > 5 cm). Sarcomatoid (including spindle cell and pleomorphic) carcinoma was associated with larger tumor size, poorer differentiation, and more dismal prognoses. 
The pathological subtype, T stage, M stage, surgery, alpha-fetoprotein, and cancer history were identified as independent predictors in patients with rare subtypes. The nomogram showed good calibration, discrimination, and net benefits in clinical practice. Conclusion: The rare subtypes of HCC had distinct clinicopathological features and biological behaviors compared with classic HCC. Our findings could provide a valuable reference for clinicians. The constructed nomogram could accurately predict prognoses, which is beneficial for individualized management.
Keywords: hepatocellular carcinoma, pathological subtype, fibrolamellar carcinoma, scirrhous carcinoma, clear cell carcinoma, spindle cell carcinoma, pleomorphic carcinoma
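An annual percent change such as the APC reported above comes from a log-linear fit of rate on calendar year (a single-segment Joinpoint). A minimal sketch of that calculation:

```python
import math

def annual_percent_change(years, rates):
    """APC from a log-linear fit ln(rate) = a + b*year: APC = 100*(e^b - 1)."""
    xs = [float(y) for y in years]
    ys = [math.log(r) for r in rates]
    n = len(xs)
    x_bar = sum(xs) / n
    y_bar = sum(ys) / n
    # ordinary least-squares slope of ln(rate) on year
    b = (sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
         / sum((x - x_bar) ** 2 for x in xs))
    return 100.0 * math.expm1(b)
```

A series declining by exactly 6.8% per year recovers APC = -6.8; real Joinpoint software additionally searches for change points and tests the slope's significance.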
Procedia PDF Downloads 75
833 A Comparative Study of the Impact of Membership in International Climate Change Treaties and the Environmental Kuznets Curve (EKC) in Line with Sustainable Development Theories
Authors: Mojtaba Taheri, Saied Reza Ameli
Abstract:
In this research, we have calculated the effect of membership in international climate change treaties for 20 developed countries, selected on the basis of the human development index (HDI), and compared this effect with the pollutant-reduction process described by Environmental Kuznets Curve (EKC) theory. For this purpose, data on real GDP per capita at constant 2010 prices are taken from the World Development Indicators (WDI) database. Ecological Footprint (ECOFP) is the amount of biologically productive land needed to meet human needs and absorb carbon dioxide emissions; it is measured in global hectares (gha), and data retrieved from the Global Ecological Footprint (2021) database are used. We proceed step by step through several series of targeted statistical regressions, examining the effects of different control variables. Energy Consumption Structure (ECS) is counted as the share of fossil fuel consumption in total energy consumption and is extracted from the United States Energy Information Administration (EIA) (2021) database. Energy Production (EP) refers to the total production of primary energy by all energy-producing enterprises in one country at a specific time; it is a comprehensive indicator of the country's energy production capacity, and its data, like the Energy Consumption Structure, are obtained from the EIA (2021). Financial development (FND) is defined as the ratio of private credit to GDP and, to some extent, the stock market value as a ratio to GDP, and is taken from the WDI (2021). Trade Openness (TRD) is the sum of exports and imports of goods and services measured as a share of GDP, from the WDI (2021). Urbanization (URB) is defined as the share of the urban population in the total population, also from the WDI (2021). 
The descriptive statistics of all the investigated variables are presented in the results section. In relation to the theories of sustainable development, the Environmental Kuznets Curve (EKC) proves the more significant over the period of study. In this research, we use more than fourteen targeted statistical regressions to isolate the net effects of each approach and examine the results.
Keywords: climate change, globalization, environmental economics, sustainable development, international climate treaty
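The EKC's inverted-U hypothesis is commonly tested with a quadratic regression of the environmental indicator on log income. A minimal sketch on synthetic data; the specification below is illustrative and is not one of the paper's fourteen regressions (which also include the control variables listed above):

```python
import numpy as np

def ekc_fit(gdp_per_capita, footprint):
    """Quadratic EKC regression: EF = b0 + b1*ln(y) + b2*ln(y)^2.
    b2 < 0 indicates the inverted U; the turning-point income is
    exp(-b1 / (2*b2))."""
    x = np.log(np.asarray(gdp_per_capita, dtype=float))
    b2, b1, b0 = np.polyfit(x, np.asarray(footprint, dtype=float), 2)
    turning = float(np.exp(-b1 / (2.0 * b2))) if b2 < 0 else None
    return (b0, b1, b2), turning
```

On data generated with a known peak at ln(y) = 9, the fit recovers the curvature and turning point; a full panel study would add country fixed effects and the control variables.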
Procedia PDF Downloads 71
832 Comparing Accuracy of Semantic and Radiomics Features in Prognosis of Epidermal Growth Factor Receptor Mutation in Non-Small Cell Lung Cancer
Authors: Mahya Naghipoor
Abstract:
Purpose: Non-small cell lung cancer (NSCLC) is the most common lung cancer type. Epidermal growth factor receptor (EGFR) mutation is a main cause of NSCLC. Computed tomography (CT) is used for the diagnosis and prognosis of lung cancers because of its low cost and minimal invasiveness. Semantic analyses of qualitative CT features are based on visual evaluation by a radiologist; however, the naked eye may not capture all image features. On the other hand, radiomics provides the opportunity for quantitative analysis of CT image features. The aim of this review study was to compare the accuracy of semantic and radiomics features in the prognosis of EGFR mutation in NSCLC. Methods: For this purpose, the keywords non-small cell lung cancer, epidermal growth factor receptor mutation, semantic, radiomics, feature, receiver operating characteristic curve (ROC), and area under the curve (AUC) were searched in PubMed and Google Scholar. In total, 29 papers were reviewed, and the AUCs of the ROC analyses for semantic and radiomics features were compared. Results: The results showed that the reported AUC values for semantic features (ground glass opacity, shape, margins, lesion density, and presence or absence of air bronchogram, emphysema, and pleural effusion) were 41%-79%. For radiomics features (kurtosis, skewness, entropy, texture, standard deviation (SD), and wavelet), the AUC values were 50%-86%. Conclusions: In conclusion, the accuracy of radiomics analysis is slightly higher than that of semantic analysis in the prognosis of EGFR mutation in NSCLC.
Keywords: lung cancer, radiomics, computer tomography, mutation
Procedia PDF Downloads 167
831 Imaging of Underground Targets with an Improved Back-Projection Algorithm
Authors: Alireza Akbari, Gelareh Babaee Khou
Abstract:
Ground Penetrating Radar (GPR) is an important nondestructive remote sensing tool that has been used in both military and civilian fields. Recently, GPR imaging has attracted much attention for the detection of small, shallow subsurface targets such as landmines and unexploded ordnance, and for through-wall imaging in security applications. For a monostatic arrangement, a single point target appears in the space-time GPR image as a hyperbolic curve because of the different trip times of the EM wave as the radar moves along a synthetic aperture and collects the reflectivity of the subsurface targets. With this hyperbolic curve, the resolution along the synthetic aperture direction shows undesired low-resolution features owing to the tails of the hyperbola. However, highly accurate information about the size, electromagnetic (EM) reflectivity, and depth of the buried objects is essential in most GPR applications. Therefore, it is often desirable to transform the hyperbolic signature in the space-time GPR image into a focused pattern showing the object's true location and size together with its EM scattering. The common goal in a typical GPR image is to display the spatial location and the reflectivity of an underground object. The main challenge of GPR imaging is therefore to devise an image reconstruction algorithm that provides high resolution and good suppression of strong artifacts and noise. In this paper, the standard back-projection (BP) algorithm, adapted to GPR imaging applications, is first used for image reconstruction. The standard BP algorithm performs poorly against strong noise and produces many artifacts, which adversely affect subsequent tasks such as target detection. Thus, an improved BP algorithm based on cross-correlation between the received signals is proposed to decrease noise and suppress artifacts. 
To improve the quality of the results of the proposed BP imaging algorithm, a weighting factor was designed for each point in the imaging region. Compared to the standard BP scheme, the improved algorithm produces images of higher quality and resolution. The proposed improved BP algorithm was applied to both simulated and real GPR data, and the results showed that it achieves superior artifact suppression and produces images of high quality and resolution. To quantitatively describe the effect of artifact suppression on the imaging results, a focusing parameter was evaluated.
Keywords: algorithm, back-projection, GPR, remote sensing
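The standard BP baseline described above (without the proposed cross-correlation weighting) can be sketched for a monostatic point target: each trace carries a spike at the two-way travel time, the raw B-scan shows the hyperbola, and back-projection sums each pixel's contributions over the aperture so that energy focuses at the true target location. The geometry, wave speed, and spike-like traces are illustrative assumptions.

```python
import math

def synthesize_bscan(target_x, target_z, positions, n_samples, dt, v=0.1):
    """Monostatic B-scan of one point target: each trace gets a unit spike
    at the two-way travel time (distances in m, times in ns, v in m/ns)."""
    data = [[0.0] * n_samples for _ in positions]
    for a, xa in enumerate(positions):
        t = 2.0 * math.hypot(xa - target_x, target_z) / v
        idx = int(round(t / dt))
        if 0 <= idx < n_samples:
            data[a][idx] = 1.0
    return data

def back_projection(data, positions, xs, zs, dt, v=0.1):
    """Standard BP: for each image pixel, sum the trace samples at the
    pixel's two-way delay over all aperture positions."""
    image = [[0.0] * len(xs) for _ in zs]
    for i, z in enumerate(zs):
        for j, x in enumerate(xs):
            for a, xa in enumerate(positions):
                t = 2.0 * math.hypot(xa - x, z) / v
                idx = int(round(t / dt))
                if 0 <= idx < len(data[a]):
                    image[i][j] += data[a][idx]
    return image
```

The improved algorithm of the paper additionally weights each pixel by the cross-correlation of the contributing traces, which penalizes pixels whose summed samples are inconsistent across the aperture.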
Procedia PDF Downloads 452
830 Multiscale Entropy Analysis of Electroencephalogram (EEG) of Alcoholic and Control Subjects
Authors: Lal Hussain, Wajid Aziz, Imtiaz Ahmed Awan, Sharjeel Saeed
Abstract:
Multiscale entropy (MSE) analysis is a useful technique recently developed to quantify the dynamics of physiological signals at different time scales. This study investigates electroencephalogram (EEG) signals to analyze the background activity of alcoholic and control subjects by inspecting the coarse-grained sequences formed at different time scales. EEG recordings of alcoholic and control subjects, acquired using 64 electrodes, were taken from the publicly available machine learning repository of the University of California, Irvine (UCI). The MSE analysis was performed on the EEG data acquired from all electrodes of alcoholic and control subjects. The Mann-Whitney rank test was used to find significant differences between the groups, and results were considered statistically significant for p-values < 0.05. The area under the receiver operating characteristic curve was computed to quantify the degree of separation between the groups. The mean ranks of MSE values at all time scales for all electrodes were higher in control subjects than in alcoholic subjects; higher mean ranks represent higher complexity and vice versa. The findings indicated that EEG signals acquired through electrodes C3, C4, F3, F7, F8, O1, O2, P3, and T7 showed significant differences between alcoholic and control subjects at time scales 1 to 5. Moreover, all electrodes exhibited significance at some time scales. Likewise, the highest accuracy and separation were obtained at the central region (C3 and C4) and the frontal, polar, and surrounding regions (P3, O1, F3, F7, F8, and T8), while other electrodes such as Fp1, Fp2, P4, and F4 showed no significant results.
Keywords: electroencephalogram (EEG), multiscale sample entropy (MSE), Mann-Whitney test (MMT), Receiver Operator Curve (ROC), complexity analysis
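The two steps of MSE, coarse-graining and sample entropy, can be sketched as follows. The parameters m = 2 and r = 0.15 times the series standard deviation are common choices in the MSE literature, not necessarily the ones used in this study.

```python
import math

def coarse_grain(signal, scale):
    """MSE step 1: average `scale` consecutive, non-overlapping samples."""
    n = len(signal) // scale
    return [sum(signal[i * scale:(i + 1) * scale]) / scale for i in range(n)]

def sample_entropy(x, m=2, r_frac=0.15):
    """SampEn(m, r): negative log of the conditional probability that runs
    matching for m points (Chebyshev distance <= r) also match for m+1."""
    n = len(x)
    mean = sum(x) / n
    sd = (sum((v - mean) ** 2 for v in x) / n) ** 0.5
    r = r_frac * sd

    def matches(mm):
        count = 0
        for i in range(n - mm):
            for j in range(i + 1, n - mm):
                if max(abs(x[i + k] - x[j + k]) for k in range(mm)) <= r:
                    count += 1
        return count

    b, a = matches(m), matches(m + 1)
    return -math.log(a / b) if a > 0 and b > 0 else float("inf")

def multiscale_entropy(signal, max_scale=5):
    """MSE step 2: SampEn of each coarse-grained series, scales 1..max_scale."""
    return [sample_entropy(coarse_grain(signal, s)) for s in range(1, max_scale + 1)]
```

An irregular signal yields higher sample entropy than a smooth, predictable one, which is the sense in which higher MSE ranks indicate higher complexity.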
Procedia PDF Downloads 376
829 Comparative Evaluation of EBT3 Film Dosimetry Using Flatbed Scanner, Densitometer and Spectrophotometer Methods and Its Applications in Radiotherapy
Authors: K. Khaerunnisa, D. Ryangga, S. A. Pawiro
Abstract:
Over the past few decades, film dosimetry has become a tool used in various radiotherapy modalities, either for clinical quality assurance (QA) or for dose verification. The response of the film to irradiation is usually expressed as optical density (OD) or net optical density (netOD). Because the film's response to radiation is not linear, the use of film as a dosimeter requires a calibration process. This study aimed to compare the calibration-curve response functions obtained with various measurement methods: a flatbed scanner, a point densitometer, and a spectrophotometer. For every response function, a radiochromic film calibration curve is generated for each method, and accuracy, precision, and sensitivity analyses are performed. netOD is obtained by measuring the change in the optical density (OD) of the film before and after irradiation: with the film scanner, ImageJ is used to extract the pixel value of the film's red channel (of the three RGB channels); with the point densitometer, the change in OD before and after irradiation is measured directly; and with the spectrophotometer, the change in absorbance before and after irradiation is measured. The results showed that the three calibration methods gave dose readings with a netOD precision below 3% at the 1σ (one sigma) uncertainty level. While the sensitivity of all three methods shows the same trend in the film's response to radiation, the magnitudes of the sensitivity differ. The accuracy of the three methods is below 3% for doses above 100 cGy and 200 cGy, but for doses below 100 cGy the error was found to be above 3% when using the point densitometer and the spectrophotometer. 
When all three methods are used for clinical implementation, the results of the study show accuracy and precision below 2% for the scanner and the spectrophotometer, and above 3% when using the point densitometer.
Keywords: calibration methods, EBT3 film dosimetry, flatbed scanner, densitometer, spectrophotometer
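The netOD computation described above, together with a simple calibration lookup, can be sketched as follows. The piecewise-linear calibration is a deliberate simplification: published EBT3 protocols typically fit a smooth (e.g. power-law) dose-response function to the calibration points instead.

```python
import math

def net_optical_density(pv_before, pv_after, pv_background=0.0):
    """netOD from red-channel pixel values of the film scanned before
    and after irradiation: netOD = log10(PV_before / PV_after)."""
    return math.log10((pv_before - pv_background) / (pv_after - pv_background))

def calibration_curve(net_ods, doses):
    """Build a piecewise-linear dose lookup from measured calibration points."""
    pairs = sorted(zip(net_ods, doses))

    def dose_of(net_od):
        for (x0, d0), (x1, d1) in zip(pairs, pairs[1:]):
            if x0 <= net_od <= x1:
                return d0 + (d1 - d0) * (net_od - x0) / (x1 - x0)
        raise ValueError("netOD outside calibrated range")

    return dose_of
```

A film that halves the transmitted pixel value, for example, has netOD = log10(2) ≈ 0.30, which the lookup then converts to dose.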
Procedia PDF Downloads 135
828 Working Memory Growth from Kindergarten to First Grade: Considering Impulsivity, Parental Discipline Methods and Socioeconomic Status
Authors: Ayse Cobanoglu
Abstract:
Working memory can be defined as a workspace that holds and regulates active information in the mind. This study investigates individual changes in children's working memory from kindergarten to first grade. The main purpose of the study is to determine whether parental discipline methods and child impulsive/overactive behaviors affect children's working memory initial status and growth rate, controlling for gender, minority status, and socioeconomic status (SES). A linear growth curve model with the first four waves of the Early Childhood Longitudinal Study-Kindergarten Cohort of 2011 (ECLS-K:2011) is performed to analyze the individual growth of children's working memory longitudinally (N=3915). Results revealed that there is significant variation among students' initial status in the kindergarten fall semester as well as in the growth rate during the first two years of schooling. While minority status, SES, and children's overactive/impulsive behaviors influenced children's initial status, only SES and minority status were significantly associated with the growth rate of working memory. Parental discipline methods, such as giving a warning and ignoring the child's negative behavior, are also negatively associated with initial working memory scores. Following that, students' working memory growth rate is examined: students with lower SES as well as minority students showed a faster growth pattern during the first two years of schooling. However, the findings on the effects of parental disciplinary methods on working memory growth rates were mixed. It can be concluded that schooling helps low-SES minority students to develop their working memory.
Keywords: growth curve modeling, impulsive/overactive behaviors, parenting, working memory
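At its core, a linear growth curve model assigns each child an initial status (intercept) and a growth rate (slope) across the four waves. A minimal per-child OLS sketch of that idea; the actual ECLS-K analysis estimates these quantities as random effects in a multilevel model with covariates, not as separate regressions.

```python
def growth_parameters(scores, waves=(0, 1, 2, 3)):
    """Per-child OLS fit of score on wave: returns (initial_status, growth_rate).
    Wave 0 is kindergarten fall, so the intercept is the initial status."""
    n = len(waves)
    t_bar = sum(waves) / n
    y_bar = sum(scores) / n
    slope = (sum((t - t_bar) * (y - y_bar) for t, y in zip(waves, scores))
             / sum((t - t_bar) ** 2 for t in waves))
    return y_bar - slope * t_bar, slope
```

A child scoring 10, 12, 14, 16 across the waves has initial status 10 and growth rate 2 per wave; variation of these pairs across children is what the growth curve model decomposes.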
Procedia PDF Downloads 135