Search results for: single error upset
5058 Alloy Design of Single Crystal Ni-base Superalloys by Combined Method of Neural Network and CALPHAD
Authors: Mehdi Montakhabrazlighi, Ercan Balikci
Abstract:
The neural network (NN) method is applied to the development of single crystal Ni-base superalloys with low density and improved mechanical strength. A dataset of 1200 records, comprising the chemical composition of the alloys, applied stress, and temperature as inputs and density and time to rupture as outputs, is used for training and testing the network. Thermodynamic phase diagram modeling of the screened alloys is performed with the Thermo-Calc software to model the equilibrium phases as well as microsegregation during solidification processing. The model is first trained on 80% of the data, and the remaining 20% is used to test it. Comparison of the predicted values with the experimental ones showed that a well-trained network is capable of accurately predicting the density and time to rupture of Ni-base superalloys. The modeling results are used to determine the effect of alloying elements, stress, temperature, and gamma-prime phase volume fraction on the rupture strength of Ni-base superalloys. This approach is in line with the Materials Genome Initiative and the integrated computational materials engineering approaches promoted recently with the aim of reducing the cost and time of developing new alloys for critical aerospace components. This work has been funded by TUBITAK under grant number 112M783.
Keywords: neural network, rupture strength, superalloy, Thermo-Calc
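The 80/20 train/test protocol described above can be sketched in a few lines. This is a toy illustration only: the stand-in model is a closed-form least-squares line rather than the authors' neural network, and the synthetic "alloy" records (a single hypothetical composition feature against density) are invented for the sketch, not taken from the 1200-record dataset.

```python
import random

# Toy stand-in for the train/test workflow: split 80/20, fit on the
# training part, evaluate on the held-out test part. Data are synthetic.
random.seed(42)

# Hypothetical records: x = one composition feature, y = density (g/cm^3)
data = [(x, 8.0 + 0.05 * x) for x in [random.uniform(0.0, 6.0) for _ in range(100)]]

random.shuffle(data)
split = int(0.8 * len(data))           # 80% train / 20% test
train, test = data[:split], data[split:]

# Closed-form least squares for y = a + b*x (stand-in for the NN)
n = len(train)
sx = sum(x for x, _ in train)
sy = sum(y for _, y in train)
sxx = sum(x * x for x, _ in train)
sxy = sum(x * y for x, y in train)
b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
a = (sy - b * sx) / n

# Held-out error, as in the paper's predicted-vs-experimental comparison
test_mse = sum((a + b * x - y) ** 2 for x, y in test) / len(test)
```

With noise-free synthetic data the fitted slope recovers the generating coefficient, which is the sanity check one would run before trusting the protocol on real measurements.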
Procedia PDF Downloads 312
5057 Exploration and Evaluation of the Effect of Multiple Countermeasures on Road Safety
Authors: Atheer Al-Nuaimi, Harry Evdorides
Abstract:
Every day, many people die or are injured or disabled on roads around the world, which necessitates more specific treatments of transportation safety issues. The International Road Assessment Programme (iRAP) model is one of the comprehensive road safety models, accounting for many factors that affect road safety in a cost-effective way in low- and middle-income countries. In the iRAP model, road safety is divided into five star ratings, from 1 star (the lowest level) to 5 stars (the highest level). These star ratings are based on a star rating score calculated by the iRAP methodology from road attributes, traffic volumes, and operating speeds. The outcome of the iRAP methodology is a set of treatments that can be used to improve road safety and reduce the number of fatalities and serious injuries (FSI). These countermeasures can be applied separately as a single countermeasure or combined as multiple countermeasures at a location. There is general agreement that the effectiveness of a countermeasure is subject to consistent losses when it is used in combination with other countermeasures; that is, the crash reduction estimates of individual countermeasures cannot simply be added together. The iRAP methodology therefore makes use of multiple-countermeasure adjustment factors to predict the reduction in effectiveness of road safety countermeasures when more than one countermeasure is selected. A multiple-countermeasure correction factor is computed for every 100-meter segment and for every crash type. A limitation of this approach, however, is a likely over-estimation of the predicted crash reduction. This study aims to adjust this correction factor by developing new models to calculate the effect of using multiple countermeasures on the number of fatalities for a location or an entire road. Regression models have been used to establish relationships between crash frequencies and the factors that affect their rates.
Multiple linear regression, negative binomial regression, and Poisson regression techniques were used to develop models that can address the effectiveness of using multiple countermeasures. Analyses conducted using the R Project for Statistical Computing showed that a model developed with the negative binomial regression technique gives more reliable predictions of the number of fatalities after the implementation of multiple road safety countermeasures than the iRAP model. The results also showed that the negative binomial regression approach gives more precise results than the multiple linear and Poisson regression techniques because of overdispersion and standard error issues.
Keywords: international road assessment program, negative binomial, road multiple countermeasures, road safety
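The core point above (individual crash reduction estimates cannot simply be added) can be made concrete with the standard multiplicative combination rule, in which the surviving crash fraction is the product of the individual surviving fractions. This is a generic sketch of that rule, not iRAP's actual adjustment factors, and the reduction fractions used are illustrative.

```python
# Multiplicative combination of countermeasure effects: the fraction of
# crashes surviving all treatments is the product of the individual
# surviving fractions (1 - r_i). Simply summing the r_i over-estimates
# the joint effect, which is the over-estimation problem the study targets.

def combined_reduction(reductions):
    surviving = 1.0
    for r in reductions:
        surviving *= (1.0 - r)
    return 1.0 - surviving

def naive_sum(reductions):
    # Naive additive estimate, for comparison only
    return sum(reductions)

# Three hypothetical countermeasures at one location
r = [0.30, 0.20, 0.10]
```

For these illustrative values the multiplicative rule gives a 49.6% reduction, noticeably below the naive 60% sum.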
Procedia PDF Downloads 239
5056 Numerical Evolution Methods of Rational Form for Diffusion Equations
Authors: Said Algarni
Abstract:
The purpose of this study was to investigate selected numerical methods that demonstrate good performance in solving PDEs. We adapted an alternative method involving rational polynomials: the Padé time stepping (PTS) method, which is highly stable for the purposes of the present application and is associated with lower computational costs. Furthermore, PTS was modified for our study, which focused on diffusion equations. Numerical runs were conducted to obtain the optimal local error control threshold.
Keywords: Padé time stepping, finite difference, reaction diffusion equation, PDEs
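As a minimal sketch of rational-polynomial time stepping for a diffusion equation: the (1,1) Padé approximant of the exponential, exp(z) ≈ (1 + z/2)/(1 − z/2), applied to the semi-discrete problem u_t = D u_xx gives the Crank–Nicolson scheme. The abstract does not specify which Padé orders or error control the authors used, so this is an assumed, lowest-order instance of the idea.

```python
import math

# (1,1) Pade time stepping (Crank-Nicolson) for u_t = D u_xx on [0,1]
# with u=0 at both ends. Each step solves a tridiagonal system via the
# Thomas algorithm.

def pade11_diffusion(M=50, D=1.0, dt=1e-3, steps=100):
    dx = 1.0 / M
    x = [i * dx for i in range(M + 1)]
    u = [math.sin(math.pi * xi) for xi in x]     # u(x,0) = sin(pi x)
    r = D * dt / (2.0 * dx * dx)
    for _ in range(steps):
        # Explicit half: rhs = (I + r*A) u at the interior nodes
        rhs = [(1 - 2 * r) * u[i] + r * (u[i - 1] + u[i + 1]) for i in range(1, M)]
        # Implicit half: solve (I - r*A) u_new = rhs (Thomas algorithm)
        n = M - 1
        diag = 1 + 2 * r
        low = up = -r
        cp = [0.0] * n                           # modified upper diagonal
        dp = [0.0] * n                           # modified right-hand side
        cp[0] = up / diag
        dp[0] = rhs[0] / diag
        for i in range(1, n):
            denom = diag - low * cp[i - 1]
            cp[i] = up / denom
            dp[i] = (rhs[i] - low * dp[i - 1]) / denom
        unew = [0.0] * n
        unew[-1] = dp[-1]
        for i in range(n - 2, -1, -1):
            unew[i] = dp[i] - cp[i] * unew[i + 1]
        u = [0.0] + unew + [0.0]
    return x, u

x, u = pade11_diffusion()
t = 100 * 1e-3
exact = [math.exp(-math.pi ** 2 * t) * math.sin(math.pi * xi) for xi in x]
err = max(abs(a - b) for a, b in zip(u, exact))
```

The scheme is unconditionally stable (here r = 1.25, well beyond the explicit stability limit), which is the "highly stable" property the abstract attributes to PTS.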
Procedia PDF Downloads 296
5055 Bayesian Estimation of Hierarchical Models for Genotypic Differentiation of Arabidopsis thaliana
Authors: Gautier Viaud, Paul-Henry Cournède
Abstract:
Plant growth models have been used extensively for the prediction of the phenotypic performance of plants. However, they most often remain calibrated for a given genotype and therefore do not take genotype-by-environment interactions into account. One way of achieving such an objective is to consider Bayesian hierarchical models. Three levels can be identified in such models: the first level describes how a given growth model produces the phenotype of the plant as a function of individual parameters; the second level describes how these individual parameters are distributed within a plant population; the third level corresponds to the attribution of priors on the population parameters. Thanks to the Bayesian framework, choosing appropriate priors for the population parameters makes it possible to derive analytical expressions for the full conditional distributions of these population parameters. As plant growth models are nonlinear in nature, individual parameters cannot be sampled explicitly, and a Metropolis step must be performed. This allows for the use of a hybrid Gibbs-Metropolis sampler. A generic approach was devised for the implementation of both general state space models and estimation algorithms within a programming platform. It was designed using the Julia language, which combines an elegant syntax with metaprogramming capabilities and exhibits high efficiency. Results were obtained for Arabidopsis thaliana on both simulated and real data. An organ-scale GreenLab model for the latter is thus presented, in which the surface area of each individual leaf can be simulated. It is assumed that the error made in the measurement of leaf areas is proportional to the leaf area itself; multiplicative normal noises for the observations are therefore used.
Real data were obtained via image analysis of zenithal images of Arabidopsis thaliana over a period of 21 days, using a two-step segmentation and tracking algorithm which notably takes advantage of the Arabidopsis thaliana phyllotaxy. Since the model formulation is rather flexible, the data for a single individual need not be available at all times, nor do the observation times need to be the same for all individuals. This makes it possible to discard data from image analysis when they are not considered reliable enough, thereby providing low-biased data in large quantity for leaf areas. The proposed model precisely reproduces the dynamics of Arabidopsis thaliana's growth while accounting for the variability between genotypes. In addition to the estimation of the population parameters, the level of variability is an interesting indicator of the genotypic stability of model parameters. A promising perspective is to test whether some of the latter should be considered as fixed effects.
Keywords: Bayesian, genotypic differentiation, hierarchical models, plant growth models
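The hybrid Gibbs-Metropolis structure described above can be sketched on a deliberately simple two-level Gaussian model: individual parameters get a Metropolis update (as needed when, like a growth model, the likelihood is not conjugate), while the population mean gets an exact Gibbs update. Everything here (model, dimensions, variances) is an illustrative assumption, not the paper's GreenLab model; the sketch is in Python although the authors worked in Julia.

```python
import random, math

# Hierarchical model sketch:
#   y_ij ~ N(theta_i, s2),  theta_i ~ N(mu, t2),  flat prior on mu.
random.seed(0)
s2, t2, true_mu = 1.0, 4.0, 5.0
thetas_true = [random.gauss(true_mu, math.sqrt(t2)) for _ in range(8)]
groups = [[random.gauss(th, math.sqrt(s2)) for _ in range(20)] for th in thetas_true]

def log_post_theta(theta, ys, mu):
    ll = -sum((y - theta) ** 2 for y in ys) / (2 * s2)   # likelihood term
    lp = -(theta - mu) ** 2 / (2 * t2)                   # population prior term
    return ll + lp

thetas = [sum(ys) / len(ys) for ys in groups]            # init at group means
mu, mu_draws = 0.0, []
for it in range(2000):
    # Metropolis step for each individual-level parameter
    for i, ys in enumerate(groups):
        prop = thetas[i] + random.gauss(0.0, 0.3)
        logr = log_post_theta(prop, ys, mu) - log_post_theta(thetas[i], ys, mu)
        if math.log(random.random()) < logr:
            thetas[i] = prop
    # Gibbs step for the population mean (conjugate full conditional)
    mu = random.gauss(sum(thetas) / len(thetas), math.sqrt(t2 / len(thetas)))
    if it >= 500:                                        # discard burn-in
        mu_draws.append(mu)

mu_hat = sum(mu_draws) / len(mu_draws)
```

In the paper's setting the Metropolis step would evaluate the nonlinear growth model inside `log_post_theta`; the sampler skeleton is otherwise the same.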
Procedia PDF Downloads 302
5054 Single Layer Carbon Nanotubes Array as an Efficient Membrane for Desalination: A Molecular Dynamics Study
Authors: Elisa Y. M. Ang, Teng Yong Ng, Jingjie Yeo, Rongming Lin, Zishun Liu, K. R. Geethalakshmi
Abstract:
By stacking carbon nanotubes (CNT) one on top of another, single layer CNT arrays can perform water-salt separation with ultra-high permeability and selectivity. Such an outer-wall CNT slit membrane is named the transverse flow CNT membrane. By adjusting the slit size between neighboring CNTs, the membrane can be configured to sieve out different solutes, right down to the separation of monovalent salt ions from water. Molecular dynamics (MD) simulation results show that the permeability of the transverse flow CNT membrane is more than two times that of conventional axial-flow CNT membranes, and orders of magnitude higher than that of current reverse osmosis membranes. In addition, MD simulations with different CNT sizes showed that the variance in desalination performance with CNT size is small. This insensitivity of the transverse flow CNT membrane's performance to CNT size is a distinct advantage over axial flow CNT membrane designs. Not only does the membrane perform well under constant-pressure desalination, but MD simulations further indicate that oscillatory operation can enhance its desalination performance further, making it suitable for operations such as electrodialysis reversal. While there are still challenges that need to be overcome, particularly in the physical fabrication of such a membrane, it is hoped that this versatile membrane design can bring the idea of using low dimensional structures for desalination closer to reality.
Keywords: carbon nanotubes, membrane desalination, transverse flow carbon nanotube membrane, molecular dynamics
Procedia PDF Downloads 193
5053 Modified Single-Folded Potentials for the Alpha-²⁴Mg and Alpha-²⁸Si Elastic Scattering
Authors: M. N. A. Abdullah, Pritha Roy, R. R. Shil, D. R. Sarker
Abstract:
Alpha-nucleus interaction is obscured because it produces enhanced cross-sections at large scattering angles, known as the anomaly in large angle scattering (ALAS). ALAS is prominent in the elastic scattering of α-particles as well as in non-elastic processes involving α-particles for incident energies up to 50 MeV and for targets of mass A ≤ 50. The Woods-Saxon type of optical model potential fails to describe the processes in a consistent manner. A folded potential is a good candidate and is often used to construct the potential derived from microscopic as well as semi-microscopic folding calculations. The present work reports analyses of the elastic scattering of α-particles from ²⁴Mg and ²⁸Si at Eα = 22-100 MeV and 14.4-120 MeV incident energies, respectively, in terms of the modified single-folded (MSF) potential. To derive the MSF potential, we take the view that the nucleons in the target nuclei ²⁴Mg and ²⁸Si are primarily in α-like clusters and the rest of the time in an unclustered nucleonic configuration. The MSF potential found in this study does not need any renormalization over the whole range of incident α energies, and the renormalization factor has been found to be exactly 1 for both targets. The best-fit parameters yield 4Aα = 21 and AN = 3 for the α-²⁴Mg potential, and 4Aα = 26 and AN = 2 for the α-²⁸Si potential in time-average pictures. The root-mean-square radii of both ²⁴Mg and ²⁸Si are also deduced, and the results obtained in this work agree well with the outcomes of other studies.
Keywords: elastic scattering, optical model, folded potential, renormalization
Procedia PDF Downloads 222
5052 Effects of the Coagulation Bath and Reduction Process on SO2 Adsorption Capacity of Graphene Oxide Fiber
Authors: Özge Alptoğa, Nuray Uçar, Nilgün Karatepe Yavuz, Ayşen Önen
Abstract:
Sulfur dioxide (SO2) is a very toxic air pollutant gas that contributes to the greenhouse effect, photochemical smog, and acid rain, which threaten human health severely. Thus, the capture of SO2 gas is very important for the environment. Graphene, a two-dimensional material, has excellent mechanical, chemical, and thermal properties, and many application areas such as energy storage devices, gas adsorption, sensing devices, and optical electronics. Further, graphene oxide (GO) is regarded as a good adsorbent because of important features such as the functional groups (epoxy, carboxyl, and hydroxyl) on its surface and its layered structure. The SO2 adsorption properties of fibers have usually been investigated on carbon fibers. In this study, the potential adsorption capacity of GO fibers was investigated. A GO dispersion was first obtained from graphite by Hummers' method, and GO fibers were then obtained via a wet spinning process. These fibers were converted into a disc shape, dried, and then subjected to an SO2 gas adsorption test. The SO2 gas adsorption capacity of the GO fiber discs was investigated with respect to the use of different coagulation baths and reduction by hydrazine hydrate. As coagulation baths, single and triple baths were used. In the single bath, only ethanol and CaCl2 (calcium chloride) salt were added. In the triple bath, each bath had a different concentration of water/ethanol and CaCl2 salt, and the disc obtained from the triple bath is referred to as the reference disc. The fibers produced with the single bath were flexible and rough, and the analyses show that they had a higher SO2 adsorption capacity than the triple-bath fibers (reference disc). However, the reduction process did not increase the adsorption capacity, because the SEM images showed that the layers and the uniform structure of the fiber form were damaged, and reduction decreased the functional groups to which SO2 attaches.
Scanning Electron Microscopy (SEM), Fourier Transform Infrared Spectroscopy (FTIR), and X-Ray Diffraction (XRD) analyses were performed on the fibers and discs, and their effects on the results were interpreted. In future applications of this study, it is intended that factors such as pH and additives be examined.
Keywords: coagulation bath, graphene oxide fiber, reduction, SO2 gas adsorption
Procedia PDF Downloads 359
5051 Feasibility of Voluntary Deep Inspiration Breath-Hold Radiotherapy Technique Implementation without Deep Inspiration Breath-Hold-Assisting Device
Authors: Auwal Abubakar, Shazril Imran Shaukat, Noor Khairiah A. Karim, Mohammed Zakir Kassim, Gokula Kumar Appalanaido, Hafiz Mohd Zin
Abstract:
Background: Voluntary deep inspiration breath-hold radiotherapy (vDIBH-RT) is an effective cardiac dose reduction technique during left breast radiotherapy. This study aimed to assess the accuracy of implementing the vDIBH technique among left breast cancer patients without the use of a special device such as a surface-guided imaging system. Methods: The vDIBH-RT technique was implemented in thirteen (13) left breast cancer patients at the Advanced Medical and Dental Institute (AMDI), Universiti Sains Malaysia. Breath-hold monitoring was performed based on breath-hold skin marks and laser light congruence observed on zoomed CCTV images from the control console during each delivery. The initial setup was verified using cone beam computed tomography (CBCT) during breath-hold. Each field was delivered using multiple beam segments to allow a delivery time of 20 seconds, which patients can tolerate in breath-hold. The data were analysed using an in-house MATLAB algorithm. The PTV margin was computed based on van Herk's margin recipe. Results: The setup errors analysed from CBCT show that the population systematic errors in the lateral (x), longitudinal (y), and vertical (z) axes were 2.28 mm, 3.35 mm, and 3.10 mm, respectively. Based on CBCT image guidance, the planning target volume (PTV) margin that would be required for vDIBH-RT using the CCTV/laser monitoring technique is 7.77 mm, 10.85 mm, and 10.93 mm in the x, y, and z axes, respectively. Conclusion: It is feasible to safely implement vDIBH-RT among left breast cancer patients without special equipment. The breath-hold monitoring technique is cost-effective, radiation-free, easy to implement, and allows real-time breath-hold monitoring.
Keywords: vDIBH, cone beam computed tomography, radiotherapy, left breast cancer
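The van Herk margin recipe cited above combines systematic (Σ) and random (σ) error components as M = 2.5Σ + 0.7σ. A minimal sketch follows; note the abstract quotes only the systematic errors, so the random error value used below is purely hypothetical, chosen for illustration and not taken from the study.

```python
# van Herk margin recipe: PTV margin M = 2.5*Sigma + 0.7*sigma, with
# Sigma the population systematic error and sigma the population random
# error, both in mm.

def van_herk_margin(systematic_mm, random_mm):
    return 2.5 * systematic_mm + 0.7 * random_mm

# Lateral systematic error reported in the abstract (2.28 mm), combined
# with a hypothetical random error of 3.0 mm:
m_lat = van_herk_margin(2.28, 3.0)
```

With these inputs the lateral margin comes out near the abstract's reported 7.77 mm, but since the study's random errors are not quoted here, the agreement should be read as illustrative only.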
Procedia PDF Downloads 55
5050 Success of Trabeculectomy: May Not Always Depend on Mitomycin C
Authors: Sushma Tejwani, Shoruba Dinakaran, Rupa Rokhade, K. Bhujang Shetty
Abstract:
Introduction and aim: One of the major causes of failure of trabeculectomy is fibrosis and scarring of the subconjunctival tissue around the bleb; hence, intra-operative use of anti-fibrotic agents like mitomycin C (MMC) has become very popular. However, the long-term effects of MMC, such as thin avascular blebs, hypotony, bleb leaks, and late-onset endophthalmitis, cannot be ignored and may preclude its use in routine trabeculectomy. In this study we aim to examine the outcomes of trabeculectomy with and without MMC in uncomplicated glaucoma patients. Methods: A retrospective study of a series of patients that underwent trabeculectomy with or without cataract surgery in the glaucoma department of a tertiary eye care centre, performed by a single surgeon for primary open angle glaucoma (POAG), primary angle closure glaucoma (PACG), and pseudoexfoliation (PXF) glaucoma. Patients with secondary glaucoma and juvenile and congenital glaucoma were excluded, as were patients undergoing a second trabeculectomy. The outcomes were studied in terms of IOP control at 1 month, 6 months, and 1 year and were analyzed separately for surgery with and without MMC. Success was defined as IOP < 16 mmHg on applanation tonometry. Further, the need for medication and for postoperative 5-fluorouracil (5FU) injections and needling was noted. Results: Eighty-nine patients' medical records were reviewed, of which 58 patients had undergone trabeculectomy without MMC and 31 with MMC. Mean age was 62.4 years (95% CI: 61-64); 34 were female and 55 male. MMC group (n=31): Preoperative mean IOP was 21.1 mmHg (95% CI: 17.6-24.6), and 22 patients had IOP > 16 mmHg. Three out of 33 patients were on a single medication and the rest were on multiple drugs. At 1 month (n=27) mean IOP was 12.4 mmHg (CI: 10.7-14), and 31/33 had success.
At 6 months (n=18) mean IOP was 13 mmHg (CI: 10.3-14.6) and 16/18 had a good outcome; however, at 1 year only 11 patients were available for follow-up, and 91% (10/11) had success. Overall, 3 patients required medication and one patient required a postoperative injection of 5FU. Non-MMC group (n=58): Preoperative mean IOP was 21.9 mmHg (CI: 19.8-24.2), and 42 had IOP > 16 mmHg. Twelve out of 58 patients were on a single medication and the rest were on multiple drugs. At 1 month (n=52) mean IOP was 14.6 mmHg (CI: 13.2-15.9), and 45/58 had IOP < 16 mmHg. At 6 months (n=31) mean IOP was 13.5 mmHg (CI: 11.9-15.2) and 26/31 had success; however, at 1 year only 23 patients came for follow-up, and of these 87% (20/23) had success. Overall, 1 patient required needling, 5 required 5FU injections, and 5 patients required medication. The success rates at each follow-up visit were not significantly different between the two groups. Conclusion: Intra-operative MMC may not be required in all patients undergoing trabeculectomy, and those without MMC also have fairly good outcomes in primary glaucoma.
Keywords: glaucoma filtration surgery, mitomycin C, outcomes of trabeculectomy, wound modulation
Procedia PDF Downloads 273
5049 Rapid Detection and Differentiation of Camel Pox, Contagious Ecthyma and Papilloma Viruses in Clinical Samples of Camels Using a Multiplex PCR
Authors: A. I. Khalafalla, K. A. Al-Busada, I. M. El-Sabagh
Abstract:
Pox and pox-like diseases of camels are a group of exanthematous skin conditions that have become increasingly important economically. They may be caused by three distinct viruses: camelpox virus (CMPV), camel contagious ecthyma virus (CCEV), and camel papillomavirus (CAPV). These diseases are difficult to differentiate based on clinical presentation in disease outbreaks. Molecular methods such as PCR targeting species-specific genes have been developed and used to identify CMPV and CCEV, but not simultaneously in a single tube. Recently, multiplex PCR has gained a reputation as a convenient diagnostic method with cost- and time-saving benefits. In the present communication, we describe the development, optimization, and validation of a multiplex PCR assay able to detect the genomes of the three viruses simultaneously in one single test, allowing rapid and efficient molecular diagnosis. The assay was developed based on the evaluation and combination of published and new primer sets, and was applied to the detection of 110 tissue samples. The method showed high sensitivity, and its specificity was confirmed by PCR-product sequencing. In conclusion, this rapid, sensitive, and specific assay is considered a useful method for identifying three important viruses in specimens from camels and as part of a molecular diagnostic regime.
Keywords: multiplex PCR, diagnosis, pox and pox-like diseases, camels
Procedia PDF Downloads 463
5048 Deep Learning for SAR Images Restoration
Authors: Hossein Aghababaei, Sergio Vitale, Giampaolo Ferraioli
Abstract:
In the context of Synthetic Aperture Radar (SAR) data, polarization is an important source of information for Earth's surface monitoring. SAR systems are often designed to transmit only one polarization. This constraint leads to either single or dual polarimetric SAR imaging modalities. Single polarimetric systems operate with a fixed single polarization of both transmitted and received electromagnetic (EM) waves, resulting in a single acquisition channel. Dual polarimetric systems, on the other hand, transmit in one fixed polarization and receive in two orthogonal polarizations, resulting in two acquisition channels. Dual polarimetric systems are obviously more informative than single polarimetric systems and are increasingly being used for a variety of remote sensing applications. In dual polarimetric systems, the choice of polarizations for the transmitter and the receiver is open. The choice of circular transmit polarization and coherent dual linear receive polarizations forms a special dual polarimetric system called hybrid polarimetry, which brings the properties of rotational invariance to geometrical orientations of features in the scene and optimizes the design of the radar in terms of reliability, mass, and power constraints. The complete characterization of target scattering, however, requires fully polarimetric data, which can be acquired with systems that transmit two orthogonal polarizations. This adds further complexity to data acquisition and shortens the coverage area or swath of fully polarimetric images compared to the swath of dual or hybrid polarimetric images. The search for solutions to augment dual polarimetric data to full polarimetric data will therefore take advantage of full characterization and exploitation of the backscattered field over a wider coverage with less system complexity. Several methods for reconstructing fully polarimetric images using hybrid polarimetric data can be found in the literature.
Although the improvements achieved by the newly investigated reconstruction techniques are undeniable, the existing methods are mostly based upon model assumptions (especially the assumption of reflectance symmetry), which may limit their reliability and applicability in vegetation and forest scenarios. To overcome the problems of these techniques, this paper proposes a new framework for reconstructing fully polarimetric information from hybrid polarimetric data. The framework uses deep learning solutions to augment hybrid polarimetric data without relying on model assumptions. A convolutional neural network (CNN) with a specific architecture and loss function is defined for this augmentation problem by focusing on different scattering properties of the polarimetric data. In particular, the method controls the CNN training process with respect to several characteristic features of polarimetric images, defined by the combination of different terms in the cost or loss function. The proposed method is experimentally validated on real data sets and compared with a well-known and standard approach from the literature. From the experiments, the reconstruction performance of the proposed framework is superior to that of conventional reconstruction methods. The pseudo fully polarimetric data reconstructed by the proposed method also agree well with the actual fully polarimetric images acquired by radar systems, confirming the reliability and efficiency of the proposed method.
Keywords: SAR image, polarimetric SAR image, convolutional neural network, deep learning, deep neural network
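The idea of a loss built from "the combination of different terms" can be sketched abstractly. The terms below (a per-channel fidelity term and a total-power term) are placeholders chosen for illustration; the paper's actual loss terms, weights, and network are not given in the abstract.

```python
# Sketch of a composite training loss: a weighted sum of terms, each
# targeting a different property of the reconstructed polarimetric
# channels. Terms and weights here are illustrative assumptions.

def mse(pred, target):
    # Per-sample fidelity term
    return sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)

def total_power_term(pred, target):
    # Penalizes mismatch in total backscattered power (a span-like quantity)
    return (sum(p * p for p in pred) - sum(t * t for t in target)) ** 2

def composite_loss(pred, target, w_mse=1.0, w_power=0.1):
    return w_mse * mse(pred, target) + w_power * total_power_term(pred, target)
```

In an actual training loop this scalar would be computed on network outputs and backpropagated; the point of the sketch is only the weighted multi-term structure.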
Procedia PDF Downloads 66
5047 Synthesis, Crystallography and Anti-TB Activity of Substituted Benzothiazole Analogues
Authors: Katharigatta N. Venugopala, Melendhran Pillay, Bander E. Al-Dhubiab
Abstract:
Tuberculosis (TB) infection is caused mainly by Mycobacterium tuberculosis (MTB), and it is one of the most threatening and widespread infectious diseases in the world. Benzothiazole derivatives are found to have diverse chemical reactivity and a broad spectrum of pharmacological activity. Some of the important pharmacological activities shown by benzothiazole analogues are antitumor, anti-inflammatory, antimicrobial, anti-tubercular, anti-leishmanial, anticonvulsant, and anti-HIV properties. With these facts in mind, the present investigation set out to synthesize a series of novel {2-(benzo[d]-thiazol-2-yl-methoxy)-substitutedaryl}-(substitutedaryl)-methanones (4a-f) and to characterize them by IR, NMR (1H and 13C), HRMS, and single-crystal X-ray studies. The title compounds were investigated for in vitro anti-tubercular activity against two TB strains, H37Rv (ATCC 25177) and MDR-MTB (multidrug-resistant MTB, resistant to isoniazid, rifampicin, and ethambutol), by the agar diffusion method. Among the synthesized compounds in the series, test compound {2-(benzo[d]thiazol-2-yl-methoxy)-5-fluorophenyl}-(4-chlorophenyl)-methanone (2c) was found to exhibit significant activity, with MICs of 1 µg/mL and 2 µg/mL against H37Rv and MDR-MTB, respectively, when compared to standard drugs. Single-crystal X-ray studies were used to study intra- and intermolecular interactions, including the polymorphism behavior of the test compounds, but none of the compounds exhibited polymorphism.
Keywords: benzothiazole analogues, characterization, crystallography, anti-TB activity
Procedia PDF Downloads 280
5046 Problems concerning Legal Regulation of Electronic Governance in Georgia
Authors: Giga Phartenadze
Abstract:
The legal framework regulating electronic governance comprises those norms which include measures for improving the functions of public institutions and a complex of actions for raising their standard, such as websites of public institutions, online services, some forms of internet interaction, and a higher level of internet services. An important legal basis for electronic governance in Georgia is the Georgian Law on Electronic Communications, which defines the legal and economic basis for utilizing electronic communication systems in Georgia. As for a single legal basis for e-governance regulation, it can be said that none exists at all. The official websites of public institutions have no standards for the proactive release of information. At the same time, there is no common legal norm that would require all public institutions to maintain an official website for public relations, accountability, publicity, and raising information quality. Electronic governance in Georgia needs comprehensive legal regulation. Public administration in electronic form is at the initial stage of development. The currently existing legal basis is of low quality for public institutions and officials as well as for citizens and business. Services for e-involvement and e-consultation are also of low quality. So far there is no established legal framework for e-governance. Therefore, a single legislative system for e-governance should be created, which will help develop effective, comprehensive, and multi-component electronic systems in the country (at central, regional, and local levels). Such a comprehensive legal framework will provide the relevant technological, institutional, and informational conditions.
Keywords: law, e-government, public administration, Georgia
Procedia PDF Downloads 321
5045 Causes Analysis of Vacuum Consolidation Failure to Soft Foundation Filled by Newly Dredged Mud
Authors: Bao Shu-Feng, Lou Yan, Dong Zhi-Liang, Mo Hai-Hong, Chen Ping-Shan
Abstract:
For soft foundations filled with newly dredged mud and improved by the Vacuum Preloading Technology (VPT), the soil strength increased only a little, the effective improvement depth was small, and the ground bearing capacity remained low. To analyze the causes in depth, several comparative single-well model experiments of VPT were conducted in the laboratory. It was concluded that: (1) serious clogging and poor drainage performance in the vertical drains were mainly caused by the high content of fine soil particles and strongly hydrophilic minerals in the dredged mud, by the excessively fast loading rate at the early stage of vacuum preloading (namely, rapidly reaching -80 kPa), and by the too-small characteristic opening size of the filter of the existing vertical drains; (2) the drainage efficiency of the drainage system was commonly reduced, in turn weakening the vacuum pressure in the soil and the soil improvement effect, by the large partial and friction losses of vacuum pressure caused by the large curvature of the vertical drains and the large transfer resistance of vacuum pressure in the horizontal drain.
Keywords: newly dredged mud, single well model experiments of vacuum preloading technology, poor drainage performance of vertical drains, poor soil improvement effect, causes analysis
Procedia PDF Downloads 285
5044 Improved Computational Efficiency of Machine Learning Algorithm Based on Evaluation Metrics to Control the Spread of Coronavirus in the UK
Authors: Swathi Ganesan, Nalinda Somasiri, Rebecca Jeyavadhanam, Gayathri Karthick
Abstract:
The COVID-19 crisis presents a substantial and critical hazard to worldwide health. Since the occurrence of the disease in late January 2020 in the UK, the number of people confirmed to have acquired the illness has increased tremendously across the country, and the number of individuals affected is undoubtedly considerably high. The purpose of this research is to develop a predictive machine learning model that can forecast COVID-19 cases within the UK. This study concentrates on statistical data collected from 31st January 2020 to 31st March 2021 in the United Kingdom. Information on total registered COVID cases, new daily cases, total registered deaths, and daily deaths due to Coronavirus was collected from the World Health Organisation (WHO). Data preprocessing was carried out to identify any missing values, outliers, or anomalies in the dataset. The data were split in an 8:2 ratio for training and testing purposes to forecast future new COVID cases. Support Vector Machines (SVM), Random Forests, and linear regression algorithms were chosen to study model performance in the prediction of new COVID-19 cases. From evaluation metrics such as the r-squared value and the mean squared error, the statistical performance of the models in predicting new COVID cases was evaluated. Random Forest outperformed the other two machine learning algorithms, with a training accuracy of 99.47% and a testing accuracy of 98.26% when n=30. The mean squared error obtained for Random Forest is 4.05e11, which is lower than that of the other predictive models used in this study. From the experimental analysis, the Random Forest algorithm performs more effectively and efficiently in predicting new COVID cases, which could help the health sector take relevant control measures against the spread of the virus.
Keywords: COVID-19, machine learning, supervised learning, unsupervised learning, linear regression, support vector machine, random forest
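The two evaluation metrics the study relies on, mean squared error and the r-squared value, are standard and can be written out directly. A minimal sketch (generic definitions, not the study's code):

```python
# Evaluation metrics used to compare the regression models.

def mean_squared_error(y_true, y_pred):
    # Average of squared residuals
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def r_squared(y_true, y_pred):
    # 1 - (residual sum of squares) / (total sum of squares)
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot
```

A perfect predictor gives r² = 1 and MSE = 0; a predictor no better than the mean of the observations gives r² = 0, which is the baseline against which the Random Forest's reported scores should be read.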
Procedia PDF Downloads 1195043 Dynamics of Chirped RZ Modulation Format in GEPON Fiber to the Home (FTTH) Network
Authors: Anurag Sharma, Manoj Kumar, Ashima, Sooraj Parkash
Abstract:
The work in this paper presents a simulative comparison of different modulation formats, namely NRZ, Manchester, and CRZ, in a 100-subscriber Gigabit Ethernet Passive Optical Network (GEPON) FTTH network operating at a 5 Gbps bit rate. It is observed from the simulation results that the CRZ modulation format is best suited for the designed system. A link design with a 1:100 splitter is used as the Passive Optical Network (PON) element, which creates communication between the central office and the different users. The Bit Error Rate (BER) is found to be 2.8535e-10 for the 5 Gbit/s system with the CRZ modulation format.Keywords: PON, FTTH, OLT, ONU, CO, GEPON
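The reported BER can be related to an equivalent Q factor via the usual Gaussian-noise relation BER = ½·erfc(Q/√2); a small standard-library sketch (the bisection bounds are arbitrary choices, and the Gaussian-noise assumption is ours, not stated in the abstract):

```python
import math

def ber_from_q(q: float) -> float:
    """BER for a given Q factor, assuming Gaussian noise statistics."""
    return 0.5 * math.erfc(q / math.sqrt(2))

def q_from_ber(ber: float) -> float:
    """Invert ber_from_q by bisection (BER decreases monotonically in Q)."""
    lo, hi = 0.0, 20.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if ber_from_q(mid) > ber:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Q factor equivalent to the BER reported for CRZ at 5 Gbit/s
print(round(q_from_ber(2.8535e-10), 2))
```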
Procedia PDF Downloads 7035042 Semilocal Convergence of a Three Step Fifth Order Iterative Method under Hölder Continuity Condition in Banach Spaces
Authors: Ramandeep Behl, Prashanth Maroju, S. S. Motsa
Abstract:
In this paper, we study the semilocal convergence of a fifth order iterative method using recurrence relations under the assumption that the first order Fréchet derivative satisfies the Hölder condition. We also calculate the R-order of convergence and provide some a priori error bounds. Based on this, we give the existence and uniqueness region of the solution for a nonlinear Hammerstein integral equation of the second kind.Keywords: Hölder continuity condition, Fréchet derivative, fifth order convergence, recurrence relations
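For reference, the Hölder continuity assumption on the Fréchet derivative typically takes the following form (generic notation, not the paper's exact constants):

```latex
\|F'(x) - F'(y)\| \le K \,\|x - y\|^{p}, \qquad x, y \in \Omega, \quad 0 < p \le 1 .
```

The case p = 1 recovers the Lipschitz condition commonly assumed in semilocal convergence analyses of Newton-type methods; Hölder continuity with p < 1 is the weaker hypothesis studied here.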
Procedia PDF Downloads 6115041 Deep Learning Based Polarimetric SAR Images Restoration
Authors: Hossein Aghababaei, Sergio Vitale, Giampaolo ferraioli
Abstract:
In the context of Synthetic Aperture Radar (SAR) data, polarization is an important source of information for Earth's surface monitoring. SAR systems are often designed to transmit only one polarization. This constraint leads to either single or dual polarimetric SAR imaging modalities. Single polarimetric systems operate with a fixed single polarization of both transmitted and received electromagnetic (EM) waves, resulting in a single acquisition channel. Dual polarimetric systems, on the other hand, transmit in one fixed polarization and receive in two orthogonal polarizations, resulting in two acquisition channels. Dual polarimetric systems are obviously more informative than single polarimetric systems and are increasingly being used for a variety of remote sensing applications. In dual polarimetric systems, the choice of polarizations for the transmitter and the receiver is open. The choice of circular transmit polarization and coherent dual linear receive polarizations forms a special dual polarimetric system called hybrid polarimetry, which brings the properties of rotational invariance to geometrical orientations of features in the scene and optimizes the design of the radar in terms of reliability, mass, and power constraints. The complete characterization of target scattering, however, requires fully polarimetric data, which can be acquired with systems that transmit two orthogonal polarizations. This adds further complexity to data acquisition and shortens the coverage area or swath of fully polarimetric images compared to the swath of dual or hybrid polarimetric images. The search for solutions to augment dual polarimetric data to full polarimetric data will therefore take advantage of full characterization and exploitation of the backscattered field over a wider coverage with less system complexity. Several methods for reconstructing fully polarimetric images using hybrid polarimetric data can be found in the literature. 
Although the improvements achieved by the newly investigated and experimentally tested reconstruction techniques are undeniable, the existing methods are mostly based upon model assumptions (especially the assumption of reflection symmetry), which may limit their reliability and applicability to vegetation and forest scenarios. To overcome the problems of these techniques, this paper proposes a new framework for reconstructing fully polarimetric information from hybrid polarimetric data. The framework uses Deep Learning solutions to augment hybrid polarimetric data without relying on model assumptions. A convolutional neural network (CNN) with a specific architecture and loss function is defined for this augmentation problem by focusing on different scattering properties of the polarimetric data. In particular, the method controls the CNN training process with respect to several characteristic features of polarimetric images, defined by the combination of different terms in the cost or loss function. The proposed method is experimentally validated with real data sets and compared with a well-known and standard approach from the literature. In the experiments, the reconstruction performance of the proposed framework is superior to that of conventional reconstruction methods. The pseudo fully polarimetric data reconstructed by the proposed method also agree well with the actual fully polarimetric images acquired by radar systems, confirming the reliability and efficiency of the proposed method.Keywords: SAR image, deep learning, convolutional neural network, deep neural network, SAR polarimetry
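A minimal numpy sketch of the kind of composite loss described above, combining a pixel-wise fidelity term with a term on a derived polarimetric quantity (here the total backscattered power, i.e. the span); the weighting and the specific choice of terms are our assumptions for illustration, not the authors' exact cost function:

```python
import numpy as np

def composite_loss(pred, target, lam=0.1):
    """Pixel-wise MSE on the channels plus a span-consistency term.

    pred, target: arrays of shape (channels, H, W) holding the
    (pseudo) fully polarimetric intensity channels. `lam` weights
    the span term relative to the channel fidelity term.
    """
    fidelity = np.mean((pred - target) ** 2)
    span_pred = pred.sum(axis=0)      # total power per pixel
    span_true = target.sum(axis=0)
    span_term = np.mean((span_pred - span_true) ** 2)
    return fidelity + lam * span_term

rng = np.random.default_rng(1)
target = rng.random((4, 8, 8))        # toy 4-channel polarimetric patch
print(composite_loss(target, target))          # perfect reconstruction
print(composite_loss(target + 0.1, target))    # perturbed reconstruction
```

In a real training loop the same combination of terms would be expressed in a deep-learning framework and minimized over the CNN's parameters.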
Procedia PDF Downloads 895040 Exploring Error-Minimization Protocols for Upper-Limb Function During Activities of Daily Life in Chronic Stroke Patients
Authors: M. A. Riurean, S. Heijnen, C. A. Knott, J. Makinde, D. Gotti, J. VD. Kamp
Abstract:
Objectives: The current study is done in preparation for a randomized controlled study investigating the effects of an implicit motor learning protocol implemented using an extension-supporting glove. It will explore different protocols to find out which is preferred when studying motor learning in the chronic stroke population that struggles with hand spasticity. Design: This exploratory study will follow 24 individuals who have had a chronic stroke (> 6 months) during their usual care journey. We will record the results of two 9-Hole Peg Tests (9HPT) done during their therapy sessions with a physiotherapist or in their home, before and after 4 weeks of wearing an extension-supporting glove used to employ the to-be-studied protocols. The participants will wear the glove 3 times/week for one hour while performing their activities of daily living and record the times they wore it in a diary. Their experience will be monitored through telecommunication once every week. Subjects: Individuals who have had a stroke at least 6 months prior to participation, with hand spasticity of at most 3 on the modified Ashworth Scale and finger flexion motor control of at least 19/33 on the Motricity Index. Exclusion criteria: extreme hemi-neglect. Methods: The participants will be randomly divided into 3 groups: one group using the glove in a pre-set way of decreasing support (implicit motor learning), one group using the glove in a self-controlled way of decreasing support (autonomous motor learning), and the third using the glove with constant support (as control). Before and after the 4-week period, there will be an intake session and a post-assessment session. Analysis: We will compare the results of the two 9HPTs to check whether the protocols were effective. Furthermore, we will compare the results between the three groups to find the preferred one. A qualitative analysis will be run of the experience of participants throughout the 4-week period. 
Expected results: We expect that the group using the implicit learning protocol will show superior results.Keywords: implicit learning, hand spasticity, stroke, error minimization, motor task
Procedia PDF Downloads 575039 Measurement of Turbulence with PITOT Static Tube in Low Speed Subsonic Wind Tunnel
Authors: Gopikrishnan, Bharathiraja, Boopalan, Jensin Joshua
Abstract:
The Pitot static tube has proven its value and practicality in measuring fluid velocity for many years. With the aim of extending the usage of such Pitot tube systems, one of the major enabling steps is to design and fabricate a highly sensitive Pitot tube for the purpose of calibrating the subsonic wind tunnel. Calibration of a wind tunnel is carried out by using different instruments to measure a variety of parameters. Using too many instruments inside the tunnel may not only affect the fluid flow but also lead to drag or losses. So, it is essential to replace the different systems with a single system that would give all the required information. This model of a highly sensitive Pitot tube has been designed to ease the calibration process. It minimizes the use of different instruments, and this single system is capable of calibrating the wind tunnel test section. This Pitot static tube is completely digitalized, so the velocity data can be collected directly from the instrument. Since the turbulence factors depend on velocity, the data collected from the Pitot static tube are then processed and the level of turbulence in the fluid flow is calculated. It is also capable of measuring the pressure distribution inside the wind tunnel and the flow angularity of the fluid. Thus, the well-designed highly sensitive Pitot static tube is utilized in calibrating the tunnel and also for the measurement of turbulence.Keywords: pitot static tube, turbulence, wind tunnel, velocity
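The conversion from the measured pressures to velocity follows directly from Bernoulli's equation for incompressible flow, and a turbulence intensity can then be formed from repeated velocity samples; a small sketch (the pressure and sample values are invented for illustration):

```python
import math
import statistics

def pitot_velocity(p_total: float, p_static: float, rho: float = 1.225) -> float:
    """Flow speed (m/s) from Pitot total and static pressures (Pa),
    assuming incompressible flow: v = sqrt(2 * dp / rho)."""
    return math.sqrt(2.0 * (p_total - p_static) / rho)

def turbulence_intensity(velocities: list) -> float:
    """Turbulence intensity: std. deviation of velocity samples over their mean."""
    return statistics.stdev(velocities) / statistics.mean(velocities)

# Example: a dynamic pressure of 600 Pa in sea-level air
v = pitot_velocity(101_925.0, 101_325.0)
print(round(v, 2))   # ~31.3 m/s

samples = [31.1, 31.5, 30.9, 31.4, 31.2]   # repeated readings at one station
print(round(turbulence_intensity(samples), 4))
```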
Procedia PDF Downloads 5255038 Effect of Fabrication Errors on High Frequency Filter Circuits
Authors: Wesam Ali
Abstract:
This paper provides useful guidelines to circuit designers on the magnitude of fabrication errors that is acceptable in multilayer millimeter-wave components, and presents data not previously reported in the literature. A particularly significant error that was quantified was the skew between conductors on different layers, where it was found that a skew angle of only 0.1° resulted in very significant changes in bandwidth and insertion loss. The work was supported by a detailed investigation of a 35 GHz multilayer edge-coupled band-pass filter, which was fabricated on alumina substrates using a photoimageable thick-film process.Keywords: fabrication errors, multilayer, high frequency band, photoimageable technology
Procedia PDF Downloads 4725037 Analysis of the Inverse Kinematics for 5 DOF Robot Arm Using D-H Parameters
Authors: Apurva Patil, Maithilee Kulkarni, Ashay Aswale
Abstract:
This paper proposes an algorithm to develop the kinematic model of a 5 DOF robot arm. The formulation of the problem is based on finding the D-H parameters of the arm. A brute-force iterative method is employed to solve the system of nonlinear equations. The focus of the paper is to obtain accurate solutions by reducing the root mean square error. The results obtained will be implemented to grip objects. The trajectories followed by the end effector for the required workspace coordinates are plotted. The methodology used here can be applied to solve the problem for any other kinematic chain of up to six DOF.Keywords: 5 DOF robot arm, D-H parameters, inverse kinematics, iterative method, trajectories
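The D-H formulation referred to above builds each joint's homogeneous transform from four parameters (θ, d, a, α); the forward-kinematics chain below is the model an iterative inverse solver would minimize against. A minimal sketch (the 5-DOF parameter table is a placeholder, not the arm in the paper):

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Standard D-H homogeneous transform for one link."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(dh_table):
    """Chain the per-link transforms: base-to-end-effector transform."""
    T = np.eye(4)
    for params in dh_table:
        T = T @ dh_transform(*params)
    return T

# Placeholder 5-DOF table: (theta, d, a, alpha) per joint
table = [
    (0.3, 0.10, 0.00, np.pi / 2),
    (0.5, 0.00, 0.25, 0.0),
    (-0.2, 0.00, 0.20, 0.0),
    (0.1, 0.00, 0.00, np.pi / 2),
    (0.4, 0.15, 0.00, 0.0),
]
T = forward_kinematics(table)
print(np.round(T[:3, 3], 4))   # end-effector position
```

An inverse solver of the kind the abstract describes would iterate over the joint angles, evaluating this chain and reducing the root-mean-square error between the computed and the target end-effector pose.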
Procedia PDF Downloads 2015036 Ten Basic Exercises of Muay Thai Chaiya on Balance and Strength in Male Older Adults
Authors: K. Thawichai, R. Pornthep
Abstract:
This study examined the effects of ten basic exercises of Muay Thai Chaiya training on balance and strength in male older adults. Thirty male older adult volunteers were recruited from Thayang elderly clubs, Thayang, Petchaburi, Thailand. All participants were randomly assigned to two groups, a training group and a control group. The training group (n=15) participated in an eight-week training program of ten basic exercises of Muay Thai Chaiya and agreed not to change or add another exercise during the study. The control group (n=15) did not participate in the Muay Thai Chaiya training. Both groups were tested before and after the eight-week study period on balance, in terms of single leg stance with eyes closed, and strength, in terms of the thirty-second chair stand. The data show that the participants in the training group achieved significantly higher scores in the single leg stance with eyes closed and the thirty-second chair stand than the participants in the control group. The results of this study suggest that ten basic exercises of Muay Thai Chaiya training can be used to improve balance and strength in male older adults.Keywords: balance, strength, Muay Thai Chaiya, older adults
Procedia PDF Downloads 4555035 Nonlinear Observer Canonical Form for Genetic Regulation Process
Authors: Bououden Soraya
Abstract:
This paper aims to study the existence of a change of coordinates which permits the transformation of a class of nonlinear dynamical systems into the so-called nonlinear observer canonical form (NOCF). Moreover, an algorithm to construct such a change of coordinates is given. Based on this form, we can design an observer with linear error dynamics. This enables us to estimate the state of a nonlinear dynamical system. A concrete example (a biological model) is provided to illustrate the feasibility of the proposed results.Keywords: nonlinear observer canonical form, observer design, gene regulation, gene expression
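In the observer canonical form, the transformed dynamics are linear up to an output injection, so a Luenberger-type observer yields linear error dynamics; schematically (generic notation, not the paper's exact system):

```latex
\begin{aligned}
\dot z &= A z + \varphi(y, u), & y &= C z, \\
\dot{\hat z} &= A \hat z + \varphi(y, u) + L\,(y - C \hat z),
  & \dot e &= (A - L C)\, e, \qquad e = z - \hat z .
\end{aligned}
```

Because the injection term φ depends only on the measured output y (and input u), it cancels in the error equation, and choosing the gain L so that A − LC is Hurwitz makes the estimation error converge exponentially.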
Procedia PDF Downloads 4315034 Human Factors Issues and Measures in Advanced NPPs
Authors: Jun Su Ha
Abstract:
Various advanced technologies will be adopted in the Advanced Control Rooms (ACRs) of advanced Nuclear Power Plants (NPPs), which is expected to increase operators' performance. However, potential human factors issues coupled with digital technologies might be troublesome. Human factors issues in ACRs are identified, and strategies (or countermeasures) for evaluating and analyzing each of these issues are addressed in this study.Keywords: advanced control room, human factor issues, human performance, human error, nuclear power plant
Procedia PDF Downloads 4685033 Study on Runoff Allocation Responsibilities of Different Land Uses in a Single Catchment Area
Authors: Chuan-Ming Tung, Jin-Cheng Fu, Chia-En Feng
Abstract:
In recent years, the rapid development of urban land in Taiwan has led to a constant increase in the area of impervious surfaces, which has increased the risk of waterlogging during heavy rainfall. Therefore, promoting runoff allocation responsibilities has often been used as a means of reducing regional flooding. In this study, a single catchment area covering both urban and rural land is taken as the study area. Based on the Storm Water Management Model (SWMM), urban and rural land in the single catchment area was explored to develop runoff allocation responsibilities according to the respective control regulations on land use. The impacts of runoff increments and reductions in the sub-catchment areas were studied to understand the effect of highly developed urban land on the flood risk of the rural land at the back end. The analysis used short-delay design rainfall of 1-hour duration with 2-year, 5-year, 10-year, and 25-year return periods. The results showed that if the study area were fully developed without runoff allocation responsibilities, the peak discharge at the outlet would increase by 24.46%-22.97%, and the front-end urban land would increase the runoff reaching the back-end rural land by 76.19%-46.51%. However, if runoff allocation responsibilities were carried out in the study area, the peak discharge could be reduced by 58.38%-63.08%, which would enable the front end to reduce the peak flow to the back end by 54.05%-23.81%. In addition, the researchers found that, from the perspective of runoff allocation responsibility per unit area, residential areas on urban land benefit from the relevant laws and regulations of the urban system, which gives a better flood-reduction effect than residential land on rural land. For rural land, the development scale of residential land is generally small, which makes its flood-reduction effect better than that of industrial land. 
Agricultural land requires a large area, resulting in the lowest share of the flow per unit area. From the planners' point of view, this study suggests that the rural land around the city should also be assigned responsibility for sharing the runoff. Setting up rainwater storage facilities, as on urban land, and taking stock of agricultural land resources to raise field ridges for flood storage can improve regional disaster reduction capacity and resilience.Keywords: runoff allocation responsibilities, land use, flood mitigation, SWMM
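As a toy illustration of per-unit-area runoff shares (not the SWMM model used in the study), a weighted runoff-coefficient calculation shows why a large, low-coefficient land use such as agriculture carries the smallest share per unit area; the coefficients and areas below are invented:

```python
# Invented areas (ha) and runoff coefficients per land use
land_uses = {
    "urban residential": {"area": 40.0,  "c": 0.70},
    "rural residential": {"area": 15.0,  "c": 0.55},
    "industrial":        {"area": 20.0,  "c": 0.80},
    "agricultural":      {"area": 125.0, "c": 0.30},
}

# Each land use's contribution to total runoff is proportional to area * c.
total_weighted = sum(u["area"] * u["c"] for u in land_uses.values())
for name, u in land_uses.items():
    share = u["area"] * u["c"] / total_weighted   # share of total runoff
    per_unit_area = share / u["area"]             # share per hectare
    print(f"{name:18s} share={share:.3f}  per-ha={per_unit_area:.5f}")
```

The per-hectare share reduces to c divided by the total weighted area, so the land use with the lowest runoff coefficient (here agriculture) always carries the smallest per-unit-area responsibility in this simplified picture.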
Procedia PDF Downloads 1025032 The Routine Use of a Negative Pressure Incision Management System in Vascular Surgery: A Case Series
Authors: Hansraj Bookun, Angela Tan, Rachel Xuan, Linheng Zhao, Kejia Wang, Animesh Singla, David Kim, Christopher Loupos
Abstract:
Introduction: Incisional wound complications in vascular surgery patients represent a significant clinical and economic burden of morbidity and mortality. The objective of this study was to trial the feasibility of applying the Prevena negative pressure incision management system as a routine dressing in patients who had undergone arterial surgery. Conventionally, Prevena has been applied to groin incisions, but this study features applications on multiple wound sites such as the thigh or major amputation stumps. Method: This was a cross-sectional observational, single-centre case series of 12 patients who had undergone major vascular surgery. Their wounds were managed with the Prevena system being applied either intra-operatively or on the first post-operative day. Demographic and operative details were collated as well as the length of stay and complication rates. Results: There were 9 males (75%) with mean age of 66 years and the comorbid burden was as follows: ischaemic heart disease (92%), diabetes (42%), hypertension (100%), stage 4 or greater kidney impairment (17%) and current or ex-smoking (83%). The main indications were acute ischaemia (33%), claudication (25%), and gangrene (17%). There were single instances of an occluded popliteal artery aneurysm, diabetic foot infection, and rest pain. The majority of patients (50%) had hybrid operations with iliofemoral endarterectomies, patch arterioplasties, and further peripheral endovascular treatment. There were 4 complex arterial bypass operations and 2 major amputations. The mean length of stay was 17 ± 10 days, with a range of 4 to 35 days. A single complication, in the form of a lymphocoele, was encountered in the context of an iliofemoral endarterectomy and patch arterioplasty. This was managed conservatively. There were no deaths. 
Discussion: The Prevena wound management system shows that, in conjunction with safe vascular surgery, absolute wound complication rates remain low, and that it remains a valuable adjunct in the treatment of vascular patients.Keywords: wound care, negative pressure, vascular surgery, closed incision
Procedia PDF Downloads 1365031 Implementation of a Lattice Boltzmann Method for Pulsatile Flow with Moment Based Boundary Condition
Authors: Zainab A. Bu Sinnah, David I. Graham
Abstract:
The Lattice Boltzmann Method has been developed and used to simulate both steady and unsteady fluid flow problems such as turbulent flows, multiphase flow and flows in the vascular system. As an example, the study of blood flow and its properties can give a greater understanding of atherosclerosis and the flow parameters which influence this phenomenon. The blood flow in the vascular system is driven by a pulsating pressure gradient which is produced by the heart. As a very simple model of this, we simulate plane channel flow under periodic forcing. This pulsatile flow is essentially the standard Poiseuille flow except that the flow is driven by the periodic forcing term. Moment boundary conditions, where various moments of the particle distribution function are specified, are applied at solid walls. We used a second-order single relaxation time model and investigated grid convergence using two distinct approaches. In the first approach, we fixed both Reynolds and Womersley numbers and varied relaxation time with grid size. In the second approach, we fixed the Womersley number and relaxation time. The expected second-order convergence was obtained for the second approach. For the first approach, however, the numerical method converged, but not necessarily to the appropriate analytical result. An explanation is given for these observations.Keywords: Lattice Boltzmann method, single relaxation time, pulsatile flow, moment based boundary condition
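A minimal single-relaxation-time (BGK) D2Q9 sketch of the forced channel flow described above, using full-way bounce-back walls rather than the moment-based boundaries the paper actually studies, and with the forcing held constant (the steady limit of the periodic forcing); the grid size, τ, and force magnitude are arbitrary illustrative choices:

```python
import numpy as np

# D2Q9 lattice: discrete velocities, weights, and opposite directions
c = np.array([(0, 0), (1, 0), (0, 1), (-1, 0), (0, -1),
              (1, 1), (-1, 1), (-1, -1), (1, -1)])
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)
opp = [0, 3, 4, 1, 2, 7, 8, 5, 6]

NX, NY = 16, 33     # periodic in x; rows y=0 and y=NY-1 are solid wall nodes
tau = 0.8           # single relaxation time
F = 1e-5            # constant x-body force (steady limit of pulsatile forcing)

f = w[:, None, None] * np.ones((9, NY, NX))    # fluid at rest, rho = 1

def equilibrium(rho, ux, uy):
    feq = np.empty((9,) + rho.shape)
    usq = ux**2 + uy**2
    for i, (cx, cy) in enumerate(c):
        cu = cx * ux + cy * uy
        feq[i] = w[i] * rho * (1 + 3 * cu + 4.5 * cu**2 - 1.5 * usq)
    return feq

for step in range(3000):
    fl = f[:, 1:-1, :]                              # fluid rows only
    rho = fl.sum(axis=0)
    ux = (fl * c[:, 0, None, None]).sum(axis=0) / rho
    uy = (fl * c[:, 1, None, None]).sum(axis=0) / rho
    f[:, 1:-1, :] = fl - (fl - equilibrium(rho, ux, uy)) / tau   # BGK collision
    for i, (cx, cy) in enumerate(c):                # simple low-Mach forcing term
        f[i, 1:-1, :] += 3 * w[i] * cx * F
    for i, (cx, cy) in enumerate(c):                # streaming
        f[i] = np.roll(np.roll(f[i], cx, axis=1), cy, axis=0)
    f[:, 0, :] = f[opp][:, 0, :]                    # full-way bounce-back walls
    f[:, -1, :] = f[opp][:, -1, :]

fl = f[:, 1:-1, :]
rho = fl.sum(axis=0)
ux = (fl * c[:, 0, None, None]).sum(axis=0) / rho
profile = ux[:, 0]                                  # x-velocity across the channel
print(round(profile.max(), 6), round(profile[0], 6))
```

The profile develops toward the parabolic Poiseuille solution; replacing the constant F with, e.g., F*(1 + sin(ωt)) would recover the pulsatile case, and swapping the bounce-back lines for moment-specified wall conditions would reproduce the paper's setup.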
Procedia PDF Downloads 2305030 A Biophysical Study of the Dynamic Properties of Glucagon Granules in α Cells by Imaging-Derived Mean Square Displacement and Single Particle Tracking Approaches
Authors: Samuele Ghignoli, Valentina de Lorenzi, Gianmarco Ferri, Stefano Luin, Francesco Cardarelli
Abstract:
Insulin and glucagon are the two essential hormones for maintaining proper blood glucose homeostasis, which is disrupted in diabetes. A constantly growing research interest has been focused on the study of the subcellular structures involved in hormone secretion, namely insulin- and glucagon-containing granules, and on the mechanisms regulating their behaviour. Yet, while several successful attempts were reported describing the dynamic properties of insulin granules, little is known about their counterparts in α cells, the glucagon-containing granules. To fill this gap, we used αTC1 clone 9 cells as a model of α cells and ZIGIR as a fluorescent Zinc chelator for granule labelling. We started by using spatiotemporal fluorescence correlation spectroscopy in the form of imaging-derived mean square displacement (iMSD) analysis. This afforded quantitative information on the average dynamical and structural properties of glucagon granules, with insulin granules as a benchmark. Interestingly, the iMSD sensitivity to average granule size allowed us to confirm that glucagon granules are smaller than insulin ones (~1.4-fold, further validated by STORM imaging). To investigate possible heterogeneities in granule dynamic properties, we moved from correlation spectroscopy to single particle tracking (SPT). We developed a MATLAB script to localize and track single granules with high spatial resolution. This enabled us to classify the glucagon granules, based on their dynamic properties, as ‘blocked’ (i.e., trajectories corresponding to immobile granules), ‘confined/diffusive’ (i.e., trajectories corresponding to slowly moving granules in a defined region of the cell), or ‘drifted’ (i.e., trajectories corresponding to fast-moving granules). In cell-culturing control conditions, the average distribution was: 32.9 ± 9.3% blocked, 59.6 ± 9.3% conf/diff, and 7.4 ± 3.2% drifted. 
This benchmarking provided us with a foundation for investigating selected experimental conditions of interest, such as the glucagon-granule relationship with the cytoskeleton. For instance, if Nocodazole (10 μM) is used for microtubule depolymerization, the percentage of drifted motion collapses to 3.5 ± 1.7% while immobile granules increase to 56.0 ± 10.7% (remaining 40.4 ± 10.2% of conf/diff). This result confirms the clear link between glucagon-granule motion and cytoskeleton structures, a first step towards understanding the intracellular behaviour of this subcellular compartment. The information collected might now serve to support future investigations on glucagon granules in physiology and disease. Acknowledgment: This work has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 866127, project CAPTUR3D).Keywords: glucagon granules, single particle tracking, correlation spectroscopy, ZIGIR
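The trajectory classification described above can be prototyped by fitting the log-log slope (the anomalous exponent α) of each trajectory's time-averaged MSD; the α thresholds and synthetic trajectories below are illustrative assumptions, not the authors' MATLAB criteria:

```python
import numpy as np

def time_averaged_msd(traj, max_lag=None):
    """Time-averaged MSD of an (N, 2) trajectory for lags 1..max_lag-1."""
    n = len(traj)
    max_lag = max_lag or n // 4
    lags = np.arange(1, max_lag)
    msd = np.array([np.mean(np.sum((traj[l:] - traj[:-l]) ** 2, axis=1))
                    for l in lags])
    return lags, msd

def classify(traj):
    """Label a trajectory by the slope alpha of log(MSD) vs log(lag)."""
    lags, msd = time_averaged_msd(traj)
    alpha = np.polyfit(np.log(lags), np.log(msd), 1)[0]
    if alpha < 0.3:
        return "blocked"               # threshold assumed
    if alpha > 1.5:
        return "drifted"               # threshold assumed
    return "confined/diffusive"

rng = np.random.default_rng(0)
blocked = rng.normal(scale=0.01, size=(200, 2))            # stationary + noise
diffusive = np.cumsum(rng.normal(size=(200, 2)), axis=0)   # random walk
drifted = (np.arange(200)[:, None] * np.array([0.2, 0.1])
           + rng.normal(scale=0.05, size=(200, 2)))        # directed motion

print(classify(blocked), classify(diffusive), classify(drifted))
```

An immobile granule gives a flat MSD (α ≈ 0), free diffusion gives α ≈ 1, and directed transport gives α ≈ 2, which maps naturally onto the blocked / confined-diffusive / drifted categories used in the study.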
Procedia PDF Downloads 1045029 Use of Nanosensors in Detection and Treatment of HIV
Authors: Sayed Obeidullah Abrar
Abstract:
A nanosensor combines two concepts: nanoparticles and sensors. These are chemical or physical sensors constructed using nanoscale components, usually microscopic or submicroscopic in size. Such sensors are very sensitive and can detect a single virus particle or even very low concentrations of substances that could be potentially harmful. Nanosensors offer a large scope for research, especially in the fields of medical sciences, military applications, pharmaceuticals, etc.Keywords: HIV/AIDS, nanosensors, DNA, RNA
Procedia PDF Downloads 297