Search results for: first-order reversal curve method
19221 Constant Order Predictor Corrector Method for the Solution of Modeled Problems of First Order IVPs of ODEs
Authors: A. A. James, A. O. Adesanya, M. R. Odekunle, D. G. Yakubu
Abstract:
This paper examines the development of a one-step, five-hybrid-point method for the solution of first-order initial value problems. We adopted the method of collocation and interpolation of a power series approximate solution to generate a continuous linear multistep method. The continuous linear multistep method was evaluated at selected grid points to give the discrete linear multistep method. The method was implemented using a constant-order predictor of order seven over an overlapping interval. The basic properties of the derived corrector were investigated, and it was found to be zero-stable, consistent, and convergent. The region of absolute stability was also investigated. The method was tested on several numerical experiments and found to compete favorably with existing methods.
Keywords: interpolation, approximate solution, collocation, differential system, half step, converges, block method, efficiency
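The authors' order-seven hybrid-point corrector is not reproduced in the abstract; as a hedged illustration of the general predictor-corrector structure it builds on, the sketch below pairs the classical two-step Adams-Bashforth predictor with an Adams-Moulton (trapezoidal) corrector on a simple test IVP. All coefficients and the test problem are standard textbook choices, not the paper's method.

```python
import numpy as np

def predictor_corrector(f, t0, y0, h, n_steps):
    """Two-step Adams-Bashforth predictor / Adams-Moulton corrector.

    Illustrative only: the paper derives its own one-step, five-hybrid-point
    corrector of order seven; this classical low-order pair just shows the
    predict-then-correct structure.
    """
    t = np.zeros(n_steps + 1)
    y = np.zeros(n_steps + 1)
    t[0], y[0] = t0, y0
    # Bootstrap the second starting value with one Euler step.
    y[1] = y[0] + h * f(t[0], y[0])
    t[1] = t[0] + h
    for i in range(1, n_steps):
        # Predictor: explicit two-step Adams-Bashforth.
        y_pred = y[i] + h * (1.5 * f(t[i], y[i]) - 0.5 * f(t[i-1], y[i-1]))
        t[i+1] = t[i] + h
        # Corrector: implicit trapezoidal (Adams-Moulton) step using y_pred.
        y[i+1] = y[i] + 0.5 * h * (f(t[i], y[i]) + f(t[i+1], y_pred))
    return t, y

# Example IVP: y' = -2y, y(0) = 1, exact solution exp(-2t).
ts, ys = predictor_corrector(lambda t, y: -2.0 * y, 0.0, 1.0, 0.01, 100)
print(ys[-1], np.exp(-2.0 * ts[-1]))
```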
Procedia PDF Downloads 337
19220 Canada's "Flattened Curve": A Geospatial Temporal Analysis of Canada's Amelioration of the SARS-CoV-2 Pandemic Through Coordinated Government Intervention
Authors: John Ahluwalia
Abstract:
As an affluent first-world nation, Canada took swift and comprehensive action during the outbreak of the SARS-CoV-2 (COVID-19) pandemic compared to other countries in the same socio-economic cohort. The United States has stumbled to overcome obstacles most developed nations have faced, which has led to significantly more per capita cases and deaths. The initial outbreaks of COVID-19 occurred in the US and Canada within days of each other and posed similar potentially catastrophic threats to public health, the economy, and governmental stability. On a macro level, events that take place in the US have a direct impact on Canada. For example, both countries tend to enter and exit economic recessions at approximately the same time, they are each other’s largest trading partners, and their currencies are inexorably linked. Variables intrinsic to Canada’s national infrastructure have been instrumental in the country’s efforts to flatten the curve of COVID-19 cases and deaths. Canada’s coordinated multi-level governmental effort has allowed it to create and enforce policies related to COVID-19 at both the national and provincial levels. Canada’s policy of universal health care is another variable. Health care and public health measures are enforced on a provincial level, and it is within each province’s jurisdiction to dictate standards for public safety based on scientific evidence. Rather than introducing confusion and the possibility of competition for resources such as PPE and vaccines, Canada’s multi-level chain of government authority has provided consistent policies supporting national public health and local delivery of medical care. This paper will demonstrate that the coordinated efforts on provincial and federal levels have been the linchpin in Canada’s relative success in containing the deadly spread of the COVID-19 virus.
Keywords: COVID-19, Canada, GIS, geospatial analysis
Procedia PDF Downloads 69
19219 Analysis of Artificial Hip Joint Using Finite Element Method
Authors: Syed Zameer, Mohamed Haneef
Abstract:
The hip joint plays a very important role in human beings as it takes up the whole-body forces generated by various activities. These loads are repetitive and fluctuating, depending on activities such as standing, sitting, jogging, stair climbing, etc., which may lead to failure of the hip joint. Hip joint modification and replacement are common in old-aged persons as well as younger persons. In this research study, static and fatigue analyses of a hip joint model were carried out using the finite element software ANSYS. The stress distribution obtained from the static analysis, material properties, and S-N curve data of fabricated ultra-high molecular weight polyethylene / 50 wt% short E-glass fibre + 40 wt% TiO2 polymer matrix composite specimens were used to estimate the fatigue life of the hip joint using a stiffness degradation model for polymer matrix composites. The stress distribution obtained from the static analysis was found to be within the acceptable range. The damage factor calculated from the Palmgren linear damage rule is less than one, which indicates that the component is safe under this design.
Keywords: hip joint, polymer matrix composite, static analysis, fatigue analysis, stress life approach
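As a hedged illustration of the fatigue-safety check described in this abstract, the sketch below evaluates the Palmgren linear damage sum for a set of loading blocks. The S-N parameters and load spectrum are invented placeholders, not the composite data measured in the study; the component is judged safe while the damage sum stays below one.

```python
def miner_damage(blocks, sn_life):
    """Palmgren linear damage sum: D = sum(n_i / N_i).

    blocks  -- list of (stress_amplitude_MPa, applied_cycles) pairs
    sn_life -- function mapping stress amplitude to cycles-to-failure
               from the material's S-N curve
    The component is considered safe while D < 1.
    """
    return sum(n / sn_life(s) for s, n in blocks)

# Hypothetical Basquin-type S-N curve, N = C * S**(-m); C and m are
# placeholders, not the composite data measured in the paper.
def sn_life(stress_mpa, C=1e12, m=3.0):
    return C * stress_mpa ** (-m)

loading = [(40.0, 1e5), (60.0, 5e4), (80.0, 1e4)]  # (MPa, cycles)
D = miner_damage(loading, sn_life)
print(f"damage sum D = {D:.3f} -> {'safe' if D < 1 else 'unsafe'}")
```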
Procedia PDF Downloads 356
19218 Life Cycle Cost Evaluation of Structures Retrofitted with Damped Cable System
Authors: Asad Naeem, Mohamed Nour Eldin, Jinkoo Kim
Abstract:
In this study, the seismic performance and life cycle cost (LCC) of a structure retrofitted with the damped cable system (DCS) are evaluated. The DCS is a seismic retrofit system composed of a high-strength steel cable and pressurized viscous dampers. The analysis model of the system is first derived using various link elements in SAP2000, and fragility curves of the structure retrofitted with the DCS and with viscous dampers are obtained using incremental dynamic analyses. The analysis results show that the residual displacements of the structure equipped with the DCS are smaller than those of the structure retrofitted with only conventional viscous dampers, due to the enhanced stiffness/strength and self-centering capability of the damped cable system. The fragility analysis shows that the structure retrofitted with the DCS has the least probability of reaching the specified limit states compared to the bare structure and the structure with viscous dampers. It is also observed that the initial cost of the DCS required for the seismic retrofit is smaller than that of the viscous dampers, and that the LCC of the structure equipped with the DCS is smaller than that of the structure with viscous dampers.
Keywords: damped cable system, fragility curve, life cycle cost, seismic retrofit, self-centering
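Fragility curves obtained from incremental dynamic analysis are commonly summarized as lognormal cumulative distribution functions of the intensity measure; a minimal sketch of that form follows, with median capacities and dispersions chosen as placeholders rather than the study's fitted values.

```python
import numpy as np
from scipy.stats import norm

def fragility(im, median, beta):
    """Lognormal fragility curve: P(limit state exceeded | IM).

    im     -- intensity measure values (e.g., spectral acceleration, g)
    median -- median IM capacity for the limit state
    beta   -- lognormal dispersion (record-to-record variability)
    This lognormal-CDF form is the standard way IDA results are
    summarized; the parameters below are placeholders, not the
    paper's results.
    """
    return norm.cdf(np.log(im / median) / beta)

im = np.linspace(0.05, 2.0, 50)
p_bare = fragility(im, median=0.6, beta=0.45)   # hypothetical bare frame
p_dcs  = fragility(im, median=1.1, beta=0.40)   # hypothetical DCS retrofit
i = im.searchsorted(1.0)
print(f"P(limit state | Sa=1.0g): bare={p_bare[i]:.2f}, DCS={p_dcs[i]:.2f}")
```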
Procedia PDF Downloads 551
19217 Rathke’s Cleft Cyst Presenting as Unilateral Visual Field Defect
Authors: Ritesh Verma, Manisha Rathi, Chand Singh Dhull, Sumit Sachdeva, Jitender Phogat
Abstract:
A Rathke's cleft cyst is a benign growth found on the pituitary gland in the brain, specifically a fluid-filled cyst in the posterior portion of the anterior pituitary gland. It occurs when the Rathke's pouch does not develop properly and ranges in size from 2 to 40 mm in diameter. A 38-year-old male presented to the outpatient department with loss of vision in the inferior quadrant of the left eye for 15 days. Visual acuity was 6/6 in the right eye and 6/9 in the left eye. Visual field analysis by HFA 24-2 revealed an inferior field defect extending to the supero-temporal quadrant in the left eye. MRI of the brain and orbit was advised, and it revealed a well-defined cystic pituitary adenoma indenting the left optic nerve near the optic chiasm, consistent with the diagnosis of Rathke’s cleft cyst (RCC). The patient was referred to the neurosurgery department for further management. Symptoms vary greatly between individuals having RCCs. RCCs can be non-functioning, functioning, or both. Besides headaches, neurocognitive deficits are almost always present but have a high rate of immediate reversal if the cyst is properly treated or drained.
Keywords: pituitary tumors, Rathke’s cleft cyst, visual field defects, vision loss
Procedia PDF Downloads 205
19216 Analysis of Factors Influencing the Response Time of an Aspirating Gaseous Agent Concentration Detection Method
Authors: Yu Guan, Song Lu, Wei Yuan, Heping Zhang
Abstract:
Gas fire extinguishing systems are widely used due to their cleanliness and efficiency. Since the spray is affected by many factors, such as convection and obstacles in the jetting region, detecting the concentration distribution in the jetting area is indispensable for evaluating system effectiveness; this is commonly achieved by the aspirating concentration detection technique. During concentration measurement, the response time of the detector is a very important parameter, especially for fire-extinguishing systems with rapid gas dispersion. A long response time will not only lead to underestimating the concentration but will also prolong the apparent change of concentration with time. Therefore, it is necessary to analyze the factors influencing the response time. In this paper, an aspirating concentration detection method is introduced, realized using a small critical nozzle and a laminar flowmeter, and, because the response time is mainly related to the gas transport process from the sampling site to the sensor, the effects of exhaust pipe size, gas flow rate, and gas concentration on the response time were analyzed. Bromotrifluoromethane (CBrF₃) was used during the research. The effect of the sampling tube was investigated with different lengths of 1, 2, 3, 4, and 5 m (5 mm pipe diameter) and different pipe diameters of 3, 4, 5, 6, and 8 mm (3 m length). The effect of gas flow rate was analyzed by changing the throat diameter of the critical nozzle between 0.5, 0.682, 0.75, 0.8, 0.84, and 0.88 mm. The effect of gas concentration on response time was studied over the concentration range of 0-25%. The results showed that the response time increased with increases in both the length and diameter of the sampling pipe; the effect of length on response time was linear, while the effect of diameter was exponential. It was also found that as the throat diameter of the critical nozzle increased, the response time reduced considerably; in other words, gas flow rate has a great influence on response time. As for gas concentration, the response time increased with increasing CBrF₃ concentration, and the slope of the curve decreased.
Keywords: aspirating concentration detection, fire extinguishing, gaseous agent, response time
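A first-order way to see why response time grows with pipe length and diameter is a plug-flow transport estimate, delay ≈ pipe volume / volumetric flow rate. The sketch below assumes this simple model and an arbitrary sampling flow; note that it reproduces the linear length dependence reported above but only a quadratic (not the observed exponential) diameter dependence, since it ignores axial mixing in the tube.

```python
import math

def transport_delay(length_m, diameter_mm, flow_lpm):
    """Plug-flow estimate of sampling delay: pipe volume / volumetric flow.

    A first-order approximation only: linear in pipe length (as observed),
    but quadratic rather than exponential in diameter, because axial
    mixing and diffusion in the tube are neglected.
    """
    area_m2 = math.pi * (diameter_mm / 1000.0) ** 2 / 4.0
    volume_m3 = area_m2 * length_m
    flow_m3s = flow_lpm / 1000.0 / 60.0
    return volume_m3 / flow_m3s

# Hypothetical sampling flow of 2 L/min through the 3 m x 5 mm pipe.
print(f"{transport_delay(3.0, 5.0, 2.0):.2f} s")
```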
Procedia PDF Downloads 270
19215 Parental Drinking and Risky Alcohol Related Behaviors: Predicting Binge Drinking Trajectories and Their Influence on Impaired Driving among College Students
Authors: Shiran Bord, Assaf Oshri, Matthew W. Carlson, Sihong Liu
Abstract:
Background: Alcohol-impaired driving (AID) and binge drinking are major health concerns among college students. Although the link between binge drinking and AID is well established, knowledge regarding binge drinking patterns, the factors influencing binge drinking, and the associations between consumption patterns and alcohol-related risk behaviors is lacking. Aims: To examine heterogeneous trajectories of binge drinking during college and to test factors that might predict class membership as well as class membership outcomes. Methods: Data were obtained from a sample of 1,265 college students (Mage = 18.5, SD = .66) as part of the Longitudinal Study of Violence Against Women (N = 1,265; 59.3% female; 69.2% white). Analyses were completed in three stages. First, a growth curve analysis was conducted to identify trajectories of binge drinking over time. Second, growth curve mixture modeling analyses were pursued to assess unobserved growth trajectories of binge drinking without predictors. Lastly, parental drinking variables were added to the model as predictors of class membership, and AID and being a passenger of a drunk driver were added to the model as outcomes. Results: Three binge drinking trajectories were identified: high-convex, medium-concave, and low-increasing. Parental drinking was associated with membership in the high-convex and medium-concave classes. Compared to the low-increasing class, the high-convex and medium-concave classes reported more AID and being a passenger of a drunk driver more frequently. Conclusions: Parental drinking may affect children’s later engagement in AID. Efforts should focus on parent education regarding the consequences of parental modeling of alcohol consumption.
Keywords: alcohol impaired driving, alcohol consumption, binge drinking, college students, parental modeling
Procedia PDF Downloads 280
19214 Effect of Microstructure and Texture of Magnesium Alloy Due to Addition of Pb
Authors: Yebeen Ji, Jimin Yun, Kwonhoo Kim
Abstract:
Magnesium alloys have seen limited industrial application due to their limited slip systems and high plastic anisotropy. It is known that specific textures form during processing (rolling, etc.), and these textures cause poor formability. To solve these problems, many researchers have studied controlling texture by adding rare-earth elements. However, their high cost limits their use; therefore, alternatives are needed to replace them. Although Pb addition does not directly improve magnesium properties, it is known to suppress the diffusion of other alloying elements and to reduce grain boundary energy. These characteristics are similar to those of rare-earth additions, and similar texture behavior is expected as well. However, there is insufficient research on this. Therefore, this study investigates the behavior of texture and microstructure development after adding Pb to magnesium. The AZ61 alloy and a Mg-15wt%Pb alloy were compared and analyzed to determine the effect of adding solute elements. The alloys were hot rolled and annealed to form a single phase and an initial texture. Afterward, the specimens were set to contract parallel to the rolling surface and elongate parallel to the rolling direction, and were then subjected to high-temperature plane strain compression at 723 K and 0.05/s. Microstructural analysis and texture measurements were performed by SEM-EBSD. The peak stress in the true stress-strain curve after compression was higher in AZ61, but the shape of the flow curve was similar for both alloys. For both alloys, continuous dynamic recrystallization was confirmed to occur during the compression process. The basal texture developed parallel to the compressed surface, and the pole density was lower in the Mg-15wt%Pb alloy. It is confirmed that this change in behavior occurs because the orientation distribution of the recrystallized grains is more random relative to the parent grains when Pb is added.
Keywords: Mg, texture, Pb, DRX
Procedia PDF Downloads 49
19213 Active Part of the Burnishing Tool Effect on the Physico-Geometric Aspect of the Superficial Layer of 100C6 and 16NC6 Steels
Authors: Tarek Litim, Ouahiba Taamallah
Abstract:
Burnishing is a mechanical surface treatment that combines several beneficial effects on the two steel grades studied. Applying burnishing with the ball or with the tip yields better roughness compared to turning. In addition, it consolidates the surface layers through work-hardening phenomena. The optimal effects are closely related to the treatment parameters and to the active part of the device. With a 78% improvement in roughness, burnishing can be classified as a finishing operation in the machining range. With a 44% gain in consolidation rate, this treatment is an effective process for material consolidation. These effects are influenced by several factors; the factors V, f, P, r, and i have the most significant effects on both roughness and hardness. Ball or tip burnishing leads to the consolidation of the surface layers of both 100C6 and 16NC6 steels by work hardening. For each steel grade and its mechanical treatment, the rational tensile curve has been drawn. Ludwik's law is used to better fit the work-hardening curve. For both grades, a material hardening law is established. For 100C6 steel, the results show a work-hardening coefficient and a consolidation rate of 0.513 and 44%, respectively, compared to the surface layers processed by turning. When 16NC6 steel is processed, the work-hardening coefficient is about 0.29. Hardness tests characterize the burnished depth well. The layer affected by work hardening can reach up to 0.4 mm. Simulation of the tests is of great importance to provide details at the local scale of the material. Conventional tensile curves provide a satisfactory indication of the toughness of the 100C6 and 16NC6 materials. A simulation of the tensile curves revealed good agreement between the experimental and simulated results for both steels.
Keywords: 100C6 steel, 16NC6 steel, burnishing, work hardening, roughness, hardness
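Ludwik's law expresses the hardening curve as σ = σ₀ + K·εⁿ. The sketch below evaluates it with the work-hardening exponents reported in the abstract (0.513 for 100C6, 0.29 for 16NC6); σ₀ and K are placeholder values, since the abstract does not state them.

```python
import numpy as np

def ludwik_stress(strain, sigma0, K, n):
    """Ludwik hardening law: sigma = sigma0 + K * eps**n (MPa)."""
    return sigma0 + K * strain ** n

eps = np.linspace(0.0, 0.2, 5)
# Exponents n are taken from the abstract; sigma0 and K are hypothetical.
for grade, n in (("100C6", 0.513), ("16NC6", 0.29)):
    sigma = ludwik_stress(eps, sigma0=400.0, K=900.0, n=n)
    print(grade, np.round(sigma, 1))
```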
Procedia PDF Downloads 168
19212 Transcriptome Analysis Reveals Role of Long Non-Coding RNA NEAT1 in Dengue Patients
Authors: Abhaydeep Pandey, Shweta Shukla, Saptamita Goswami, Bhaswati Bandyopadhyay, Vishnampettai Ramachandran, Sudhanshu Vrati, Arup Banerjee
Abstract:
Background: Long non-coding RNAs (lncRNAs) are important regulators of gene expression and play an important role in viral replication and disease progression. The role of lncRNA genes in dengue virus-mediated pathogenesis is currently unknown. Methods: To gain additional insights, we utilized an unbiased RNA sequencing approach followed by in silico analysis to identify the differentially expressed lncRNAs and genes that are associated with dengue disease progression. We then focused our study on the lncRNA NEAT1 (Nuclear Paraspeckle Assembly Transcript 1), as it was found to be differentially expressed in PBMCs of dengue-infected patients. Results: The expression of lncRNA NEAT1, relative to dengue infection (DI), was significantly down-regulated as patients developed complications. Moreover, pairwise analysis of follow-up patients confirmed that suppression of NEAT1 expression was associated with a rapid fall in platelet count in dengue-infected patients. Severe dengue (DS) patients (n=18; platelet count < 20K), once recovered from infection, showed high NEAT1 expression, as observed in healthy donors. By co-expression network analysis and subsequent validation, we revealed that expression of the coding gene IFI27 was significantly up-regulated in severe dengue cases and negatively correlated with NEAT1 expression. To discriminate DI from severe dengue, the receiver operating characteristic (ROC) curve was calculated. It revealed a sensitivity and specificity of 100% (95% CI: 85.69-97.22) and an area under the curve (AUC) of 0.97 for NEAT1. Conclusions: Altogether, our first observations demonstrate that monitoring NEAT1 and IFI27 expression in dengue patients could be useful in understanding dengue virus-induced disease progression, and these genes may be involved in pathophysiological processes.
Keywords: dengue, lncRNA, NEAT1, transcriptome
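A minimal sketch of the ROC analysis used above, on synthetic stand-in data (the patient expression values are not public): because NEAT1 is down-regulated in severe cases, the negated expression value serves as the classification score.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)
# Synthetic stand-in data: 1 = severe dengue, 0 = dengue infection (DI);
# lower NEAT1 expression in the severe group, as the abstract reports.
labels = np.r_[np.ones(18), np.zeros(25)]
neat1 = np.r_[rng.normal(0.8, 0.4, 18), rng.normal(2.0, 0.6, 25)]

# NEAT1 is DOWN-regulated in severe cases, so score on the negated value.
auc = roc_auc_score(labels, -neat1)
fpr, tpr, thresholds = roc_curve(labels, -neat1)
print(f"AUC = {auc:.2f}")  # the paper reports AUC = 0.97 on real patients
```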
Procedia PDF Downloads 310
19211 Development of 3D Particle Method for Calculating Large Deformation of Soils
Authors: Sung-Sik Park, Han Chang, Kyung-Hun Chae, Sae-Byeok Lee
Abstract:
In this study, a three-dimensional (3D) particle method that requires no grid was developed for analyzing large deformation of soils, instead of using the ordinary finite element method (FEM) or finite difference method (FDM). In the 3D particle method, the governing equations were discretized by various particle interaction models corresponding to differential operators such as gradient, divergence, and Laplacian. The Mohr-Coulomb failure criterion was incorporated into the 3D particle method to determine soil failure. The yielding and hardening behavior of soil before failure was also considered by varying the viscosity of the soil. First, an unconfined compression test was carried out, and the large deformation following soil yielding or failure was simulated by the developed 3D particle method. The results were also compared with those of the commercial FEM software PLAXIS 3D. The developed 3D particle method was able to simulate the 3D large deformation of soils due to soil yielding and to calculate the variation of normal and shear stresses following clay deformation.
Keywords: particle method, large deformation, soil column, confined compressive stress
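The Mohr-Coulomb failure check that the abstract incorporates can be written directly in terms of principal stresses; a minimal sketch follows, with cohesion and friction angle as placeholder soil parameters rather than values from the paper.

```python
import math

def mohr_coulomb_fails(sigma1, sigma3, cohesion, phi_deg):
    """Check the Mohr-Coulomb failure criterion for principal stresses.

    Failure when sigma1 reaches the Mohr-Coulomb strength envelope:
      sigma1 >= sigma3 * (1 + sin(phi)) / (1 - sin(phi))
                + 2 * c * cos(phi) / (1 - sin(phi))
    Compression positive; c and phi below are placeholders, not the
    soil parameters used in the paper.
    """
    s = math.sin(math.radians(phi_deg))
    c = math.cos(math.radians(phi_deg))
    sigma1_limit = sigma3 * (1 + s) / (1 - s) + 2 * cohesion * c / (1 - s)
    return sigma1 >= sigma1_limit

# Hypothetical clay: c = 25 kPa, phi = 20 degrees; stresses in kPa.
print(mohr_coulomb_fails(sigma1=120.0, sigma3=20.0, cohesion=25.0, phi_deg=20.0))
```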
Procedia PDF Downloads 572
19210 Colorful Textiles with Antimicrobial Property Using Natural Dyes as Effective Green Finishing Agents
Authors: Shahid-ul-Islam, Faqeer Mohammad
Abstract:
The present study was conducted to investigate the effect of annatto, teak, and flame-of-the-forest natural dyes on the color, fastness, and antimicrobial properties of a protein-based textile substrate. The color strength (K/S) of wool samples at various concentrations of the dyes was analysed using a reflectance spectrophotometer. The antimicrobial activity of the natural dyes before and after application on wool was tested against the common human pathogens Escherichia coli, Staphylococcus aureus, and Candida albicans, using the micro-broth dilution method, disc diffusion assay, and growth curve studies. The structural morphology of the natural protein fibre (wool) was investigated by Scanning Electron Microscopy (SEM). The annatto and teak natural dyes proved very effective in inhibiting microbial growth in the solution phase and after application on wool, and resulted in a broad, beautiful spectrum of colors with exceptional fastness properties. The results encourage the search for and exploitation of new plant species as sources of dyes to replace the toxic synthetic antimicrobial agents currently used in the textile industry.
Keywords: annatto, antimicrobial agents, natural dyes, green textiles
Procedia PDF Downloads 318
19209 The Implementation of Secton Method for Finding the Root of Interpolation Function
Authors: Nur Rokhman
Abstract:
A mathematical function gives the relationship between the variables composing the function. Interpolation can be viewed as a process of finding a mathematical function which goes through some specified points. There are many interpolation methods, namely the Lagrange method, Newton method, spline method, etc. Under some specific conditions, such as a large number of interpolation points, the interpolation function cannot be written explicitly; such a function consists of computational steps. The solution of equations involving the interpolation function is therefore a problem of solving a nonlinear equation. The Newton method will not work on the interpolation function, for the derivative of the interpolation function cannot be written explicitly. This paper shows the use of the Secton method to determine the numerical solution of equations involving the interpolation function. The experiment shows that the Secton method works better than the Newton method in finding the root of the Lagrange interpolation function.
Keywords: Secton method, interpolation, nonlinear function, numerical solution
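The abstract does not spell out the Secton update rule; as a hedged stand-in, the sketch below applies the classical derivative-free secant iteration, which shares the property the abstract relies on (no explicit derivative), to a Lagrange interpolant that exists only as computational steps.

```python
def lagrange(xs, ys):
    """Return the Lagrange interpolant through points (xs[i], ys[i])
    as a plain callable -- the function exists only as computational
    steps, exactly the situation described in the abstract."""
    def p(x):
        total = 0.0
        for i, (xi, yi) in enumerate(zip(xs, ys)):
            term = yi
            for j, xj in enumerate(xs):
                if j != i:
                    term *= (x - xj) / (xi - xj)
            total += term
        return total
    return p

def secant(f, x0, x1, tol=1e-10, max_iter=100):
    """Derivative-free secant iteration for f(x) = 0."""
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        if f1 == f0:
            break
        x0, x1 = x1, x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x1 - x0) < tol:
            break
    return x1

# Interpolate y = x**2 - 2 at a few nodes and find the root near 1.4.
p = lagrange([0.0, 1.0, 2.0], [-2.0, -1.0, 2.0])
print(secant(p, 1.0, 2.0))  # ~1.41421, i.e. sqrt(2)
```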
Procedia PDF Downloads 379
19208 Mathematical Model to Quantify the Phenomenon of Democracy
Authors: Mechlouch Ridha Fethi
Abstract:
This paper presents a recent mathematical model in political science concerning democracy. The model is represented by a logarithmic equation linking the Relative Index of Democracy (RID) to the Participation Ratio (PR). Firstly, the meanings of the different parameters of the model are presented, and the variation curve of the RID according to PR, with its different critical areas, is discussed. Secondly, the model is applied to a virtual group, where we show that the model can be applied depending on gender. Thirdly, it is observed that the model can be extended to different language models of democracy and that it may be of use in assessing the state of democracy for some international organizations like the UNO.
Keywords: democracy, mathematics, modelization, quantification
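The abstract states only that RID depends logarithmically on PR, without giving the equation. The sketch below therefore assumes a hypothetical form RID = a·ln(PR) + b purely to illustrate such a link; the coefficients a and b are placeholders, not the model's actual parameters.

```python
import numpy as np

# The functional form and coefficients below are hypothetical
# placeholders used purely to visualize a logarithmic RID-PR link.
def rid(pr, a=1.0, b=0.0):
    """Hypothetical logarithmic link: RID = a * ln(PR) + b, 0 < PR <= 1."""
    return a * np.log(pr) + b

for pr in (0.25, 0.5, 0.75, 1.0):
    print(f"PR = {pr:.2f} -> RID = {rid(pr):.3f}")
```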
Procedia PDF Downloads 368
19207 [Keynote] Implementation of Quality Control Procedures in Radiotherapy CT Simulator
Authors: B. Petrović, L. Rutonjski, M. Baucal, M. Teodorović, O. Čudić, B. Basarić
Abstract:
Purpose/Objective: Radiotherapy treatment planning requires the use of a CT simulator in order to acquire CT images. The overall performance of the CT simulator determines the quality of the radiotherapy treatment plan and, in the end, the outcome of treatment for every single patient. Therefore, it is strongly advised by international recommendations to set up quality control procedures for every machine involved in the radiotherapy treatment planning process, including the CT scanner/simulator. The overall process requires a number of tests, which are used on a daily, weekly, monthly, or yearly basis, depending on the feature tested. Materials/Methods: Two phantoms were used: a dedicated CIRS 062QA phantom and the QA phantom supplied with the CT simulator. The examined CT simulator was a Siemens Somatom Definition AS Open, dedicated to radiation therapy treatment planning. The CT simulator has built-in software which enables fast and simple evaluation of CT QA parameters using the phantom provided with the simulator. On the other hand, the recommendations contain additional tests, which were done with the CIRS phantom. Also, legislation on ionizing radiation protection requires CT testing at defined intervals. Taking into account the requirements of the law, the built-in tests of the CT simulator, and the international recommendations, the institutional QC programme for the CT simulator was defined and implemented. Results: The CT simulator parameters evaluated through the study were the following: CT number accuracy, field uniformity, complete CT-to-ED conversion curve, spatial and contrast resolution, image noise, slice thickness, and patient table stability. The following limits were established and implemented: CT number accuracy within +/- 5 HU of the value at commissioning; field uniformity within +/- 10 HU in selected ROIs; the complete CT-to-ED curve for each tube voltage must comply with the curve obtained at commissioning, with deviations of not more than 5%; spatial and contrast resolution tests must comply with the tests obtained at commissioning, otherwise the machine requires service; the result of the image noise test must fall within 20% of the baseline value; slice thickness must meet manufacturer specifications; and patient table stability under longitudinal transfer of the loaded table must not show more than 2 mm of vertical deviation. Conclusion: The implemented QA tests gave an overall basic understanding of CT simulator functionality and its clinical effectiveness in radiation treatment planning. The legal requirement for the clinic is to set up its own QA programme with minimum testing, but it remains the user's decision whether additional testing, as recommended by international organizations, will be implemented, so as to improve the overall quality of the radiation treatment planning procedure, since the CT image quality used for radiation treatment planning influences the delineation of the tumor and the calculation accuracy of the treatment planning system, and finally the delivery of radiation treatment to the patient.
Keywords: CT simulator, radiotherapy, quality control, QA programme
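A minimal sketch of how the stated tolerances could be automated as pass/fail checks; the baseline and measured values are invented for illustration, and only a subset of the listed tests is shown.

```python
# QA tolerance checks using the limits stated in the abstract.

def check_ct_number(measured_hu, baseline_hu, tol_hu=5.0):
    """CT number accuracy: within +/-5 HU of the commissioning value."""
    return abs(measured_hu - baseline_hu) <= tol_hu

def check_uniformity(roi_values_hu, center_hu, tol_hu=10.0):
    """Field uniformity: every ROI within +/-10 HU of the center ROI."""
    return all(abs(v - center_hu) <= tol_hu for v in roi_values_hu)

def check_ct_to_ed(measured_ed, baseline_ed, tol_frac=0.05):
    """CT-to-ED curve: each point within 5% of the commissioning curve."""
    return all(abs(m - b) / b <= tol_frac
               for m, b in zip(measured_ed, baseline_ed))

def check_noise(measured_sd, baseline_sd, tol_frac=0.20):
    """Image noise: within 20% of the baseline value."""
    return abs(measured_sd - baseline_sd) / baseline_sd <= tol_frac

# Invented example readings; a real programme would log these per session.
print(check_ct_number(measured_hu=2.0, baseline_hu=0.0))    # True
print(check_uniformity([3.0, -4.0, 8.0], center_hu=0.0))    # True
print(check_ct_to_ed([1.02, 1.48], [1.00, 1.45]))           # True
print(check_noise(measured_sd=11.5, baseline_sd=10.0))      # True
```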
Procedia PDF Downloads 533
19206 Discontinuous Spacetime with Vacuum Holes as Explanation for Gravitation, Quantum Mechanics and Teleportation
Authors: Constantin Z. Leshan
Abstract:
Hole Vacuum theory is based on a discontinuous spacetime that contains vacuum holes. Vacuum holes can explain gravitation and some laws of quantum mechanics, and they allow teleportation of matter. All massive bodies emit a flux of holes which curve the spacetime; if we increase the concentration of holes, it leads to length contraction and time dilation, because the holes do not have the properties of extension and duration. In the limiting case when space consists of holes only, the distance between every two points is equal to zero and time stops - outside of the Universe, the extension and duration properties do not exist. For this reason, the vacuum hole is the only particle in physics capable of describing gravitation using its own properties only. All microscopic particles must 'jump' continually and 'vibrate' due to the appearance of holes (impassable microscopic 'walls' in space), and this is the cause of quantum behavior. Vacuum holes can explain entanglement, non-locality, the wave properties of matter, tunneling, the uncertainty principle, and so on. Particles do not have trajectories because spacetime is discontinuous and has impassable microscopic 'walls'; simple mechanical motion is impossible at small-scale distances, and it is impossible to 'trace' a straight line in the discontinuous spacetime because it contains the impassable holes. Spacetime 'boils' continually due to the appearance of the vacuum holes. For teleportation to be possible, we must send a body outside of the Universe by enveloping it with a closed surface consisting of vacuum holes. Since a material body cannot exist outside of the Universe, it reappears instantaneously at a random point of the Universe. Since a body disappears in one volume and reappears in another random volume without traversing the physical space between them, such a transportation method can be called teleportation (or Hole Teleportation). It is shown that Hole Teleportation does not violate causality or special relativity, due to its random nature and other properties. Although Hole Teleportation has a random nature, it can be used for colonization of extrasolar planets with the help of a method called 'random jumps': after a large number of random teleportation jumps, there is a probability that the spaceship may appear near a habitable planet. We can create vacuum holes experimentally using the method proposed by Descartes: we must remove a body from a vessel without permitting another body to occupy this volume.
Keywords: border of the Universe, causality violation, perfect isolation, quantum jumps
Procedia PDF Downloads 425
19205 Ductility Spectrum Method for the Design and Verification of Structures
Authors: B. Chikh, L. Moussa, H. Bechtoula, Y. Mehani, A. Zerzour
Abstract:
This study presents a new method applicable to the evaluation and design of structures, which has been developed and is illustrated by comparison with the capacity spectrum method (CSM, ATC-40). This method uses inelastic spectra and gives peak responses consistent with those obtained from nonlinear time history analysis. Hereafter, the seismic demand assessment method is called the DSM, Ductility Spectrum Method. It is used to estimate the seismic deformation of Single-Degree-Of-Freedom (SDOF) systems based on the DDRS, Ductility Demand Response Spectrum, developed by the author.
Keywords: seismic demand, capacity, inelastic spectra, design and structure
Procedia PDF Downloads 396
19204 Reliability Analysis of Geometric Performance of Onboard Satellite Sensors: A Study on Location Accuracy
Authors: Ch. Sridevi, A. Chalapathi Rao, P. Srinivasulu
Abstract:
The location accuracy of data products is a critical parameter in assessing the geometric performance of satellite sensors. This study focuses on reliability analysis of onboard sensors to evaluate their location accuracy performance over time. The analysis utilizes field failure data and employs the Weibull distribution to determine reliability and, in turn, to understand improvements or degradations over a period of time. The analysis begins by scrutinizing the location accuracy error, which is the root mean square (RMS) error of the differences between ground control point coordinates observed on the product and on the map, and identifying the failure data with reference to time. A significant challenge in this study is to thoroughly analyze the possibility of an infant mortality phase in the data. To address this, the Weibull distribution is utilized to determine whether the data exhibit an infant stage or have transitioned into the operational phase; the shape parameter beta plays a crucial role in identifying this stage. Additionally, determining the exact start of the operational phase and the end of the infant stage poses another challenge, as it is crucial to eliminate residual infant mortality or wear-out from the model, since either can significantly increase the total failure rate. To address this, the well-established statistical Laplace test is applied to infer the behavior of the sensors and to accurately ascertain the duration of the different phases in the lifetime and the time required for stabilization. This approach also helps in understanding whether the bathtub curve model, which accounts for the different phases in the lifetime of a product, is appropriate for the data, and whether the thresholds for the infant period and wear-out phase are accurately estimated, by validating the data in individual phases with Weibull distribution curve fitting. Once the operational phase is determined, reliability is assessed using Weibull analysis. This analysis not only provides insights into the reliability of individual sensors with regard to location accuracy over the required period of time, but also establishes a model that can be applied to automate similar analyses for various sensors and parameters using field failure data. Furthermore, the identification of the best-performing sensor through this analysis serves as a benchmark for future missions and designs, ensuring continuous improvement in sensor performance and reliability. Overall, this study provides a methodology to accurately determine the duration of the different phases in the life data of individual sensors. It enables an assessment of the time required for stabilization and provides insights into the reliability during the operational phase and the commencement of the wear-out phase. By employing this methodology, designers can make informed decisions regarding sensor location accuracy performance, contributing to enhanced accuracy in satellite-based applications.
Keywords: bathtub curve, geometric performance, Laplace test, location accuracy, reliability analysis, Weibull analysis
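A hedged sketch of the two statistical steps described above, on invented failure times: the Laplace trend test to judge whether the sensor is still in the infant-mortality phase, and a Weibull fit whose shape parameter beta distinguishes a decreasing failure rate (beta < 1) from a stable operational phase (beta ≈ 1).

```python
import numpy as np
from scipy.stats import weibull_min

# Hypothetical times (days) at which location-accuracy failures were
# recorded -- stand-ins for the field failure data used in the study.
failure_times = np.array([12.0, 30.0, 55.0, 90.0, 140.0, 200.0, 270.0])

# Laplace trend test on an observation window [0, T]: a strongly
# negative statistic suggests reliability growth (infant mortality
# fading out); near zero suggests a stable operational phase.
T = 300.0
n = len(failure_times)
u = (failure_times.mean() - T / 2.0) / (T * np.sqrt(1.0 / (12.0 * n)))
print(f"Laplace statistic u = {u:.2f}")

# Weibull fit: shape (beta) < 1 indicates the infant-mortality phase,
# beta ~ 1 a roughly constant failure rate.
beta, loc, eta = weibull_min.fit(failure_times, floc=0.0)
print(f"Weibull shape beta = {beta:.2f}, scale eta = {eta:.1f}")
print(f"R(100 days) = {weibull_min.sf(100.0, beta, loc, eta):.2f}")
```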
Procedia PDF Downloads 65
19203 Presenting the Mathematical Model to Determine Retention in the Watersheds
Authors: S. Shamohammadi, L. Razavi
Abstract:
Based on the principal concepts of the SCS-CN model, this paper presents a new mathematical model for the computation of retention potential (S). In the mathematical model, not only are the precipitation-runoff concepts of the SCS-CN model precisely represented in mathematical form, but new concepts, called "maximum retention" and "total retention", are introduced, and the concepts of potential retention capacity, maximum retention, and total retention are separated from each other. In the proposed model, actual retention (F), maximum actual retention (Fmax), total retention (S), maximum retention (Smax), and potential retention (Sp) are clearly defined for the first time, such that Sp is not variable but a function of the morphological characteristics of the watershed. Indeed, based on the mathematical relation of the conceptual curve of the SCS-CN model, the proposed model provides a new method for the computation of actual retention in a watershed, and runoff is simply determined from it. In the corresponding relations, in addition to precipitation (P), initial retention (Ia), cumulative actual retention capacity (F), total retention (S), runoff (Q), antecedent moisture (M), and potential retention (Sp), we introduced Fmax and Fmin, referring to the maximum and minimum actual retention, respectively, as well as ksh, a coefficient which depends on the morphological characteristics of the watershed. Advantages of the modified version versus the original model include better precision, higher performance, easier calibration, and faster computing.
Keywords: model, mathematical, retention, watershed, SCS
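For reference, the classical SCS-CN relation that the proposed model builds on computes direct runoff as Q = (P − Ia)² / (P − Ia + S) for P > Ia. A minimal sketch follows, using the textbook initial-abstraction ratio Ia = 0.2·S and a curve-number conversion that the abstract does not necessarily adopt.

```python
def scs_runoff(P, S, ia_ratio=0.2):
    """Classical SCS-CN direct runoff: Q = (P - Ia)**2 / (P - Ia + S).

    P        -- rainfall depth (mm)
    S        -- potential retention (mm)
    ia_ratio -- initial abstraction ratio; Ia = 0.2*S is the textbook
                default, not necessarily the value used in the paper.
    """
    Ia = ia_ratio * S
    if P <= Ia:
        return 0.0
    return (P - Ia) ** 2 / (P - Ia + S)

# S from a curve number: S = 25400/CN - 254 (mm), e.g. CN = 75.
CN = 75.0
S = 25400.0 / CN - 254.0
print(f"S = {S:.1f} mm, Q = {scs_runoff(60.0, S):.1f} mm for P = 60 mm")
```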
Procedia PDF Downloads 457
19202 Nonparametric Path Analysis with a Truncated Spline Approach in Modeling Waste Management Behavior Patterns
Authors: Adji Achmad Rinaldo Fernandes, Usriatur Rohma
Abstract:
Nonparametric path analysis is a statistical method that does not rely on the assumption that the curve shape is known. The purpose of this study is to determine the best truncated spline nonparametric path function, between linear and quadratic polynomial degrees with 1, 2, and 3 knot points, and to determine the significance of the estimates of the best truncated spline nonparametric path function in the model of the effect of perceived benefits and perceived convenience on the behavior of converting waste into economic value, through the intention variable of changing people's mindset about waste, using the t-test statistic at the jackknife resampling stage. The data used in this study are primary data obtained from research grants. The results show that the best truncated spline nonparametric path model is the quadratic polynomial with 3 knot points. In addition, the significance test of the best truncated spline nonparametric path function estimates using jackknife resampling shows that all exogenous variables have a significant influence on the endogenous variables.
Keywords: nonparametric path analysis, truncated spline, linear, quadratic, behavior to turn waste into economic value, jackknife resampling
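A truncated power spline of degree d with knots k₁, …, k_m represents the regression function with the basis 1, x, …, x^d plus the truncated terms (x − k_j)₊^d. A minimal sketch of building that design matrix follows; the knot locations are illustrative, not those estimated in the study.

```python
import numpy as np

def truncated_spline_basis(x, degree, knots):
    """Design matrix for a truncated power spline:
    columns 1, x, ..., x**degree, then (x - k)_+**degree per knot."""
    x = np.asarray(x, dtype=float)
    cols = [x ** p for p in range(degree + 1)]
    cols += [np.clip(x - k, 0.0, None) ** degree for k in knots]
    return np.column_stack(cols)

# Quadratic spline with 3 knots, as in the study's best model; the knot
# positions here are arbitrary examples, not estimated from the data.
x = np.linspace(0.0, 10.0, 6)
X = truncated_spline_basis(x, degree=2, knots=[2.5, 5.0, 7.5])
print(X.shape)  # (6, 6): intercept + x + x^2 + three truncated terms
```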
Procedia PDF Downloads 47
19201 Top-Down Construction Method in Concrete Structures: Advantages and Disadvantages of This Construction Method
Authors: Hadi Rouhi Belvirdi
Abstract:
The construction of underground structures using the traditional method, which begins with excavation and the execution of the foundation of the underground structure, continues with the construction of the main structure from the ground up, and concludes with the completion of the final ceiling, is known as the Bottom-Up Method. In contrast, there is an advanced technique called the Top-Down Method, which has practically replaced the traditional construction method in large projects in industrialized countries in recent years. Unlike the traditional approach, this method starts with the construction of the surrounding walls, columns, and the final ceiling, and is completed with the excavation and construction of the foundation of the underground structure. Some of the most significant advantages of this method include the elimination or minimization of formwork surfaces, the removal of temporary bracing during excavation, the creation of some traffic facilities during the construction of the structure, and the possibility of its use in limited and high-traffic urban spaces. Despite these numerous advantages, there is unfortunately still insufficient awareness of this method in our country, to the extent that it can be confidently stated that most stakeholders in the construction industry are unaware of the existence of such a construction method. However, it can be utilized as a very important execution option alongside other conventional methods in the construction of underground structures. Therefore, given the extensive practical capabilities of this method, this article aims to present a methodology for constructing underground structures based on the aforementioned advanced method to the scientific community of the country, to examine the advantages and limitations of this method and their impacts on time and costs, and to discuss its application in urban spaces. Finally, some underground structures in the Ahvaz urban rail, which, to the best of our knowledge, are being implemented using this advanced method, will be introduced.
Keywords: top-down method, bottom-up method, underground structure, construction method
Procedia PDF Downloads 12
19200 Stating Best Commercialization Method: An Unanswered Question from Scholars and Practitioners
Authors: Saheed A. Gbadegeshin
Abstract:
A commercialization method is a means of making inventions available on the market for final consumption. It is described as an important tool for keeping business enterprises sustainable and for improving national economic growth. Thus, there are several scholarly publications on it, either presenting or testing different methods of commercialization. However, young entrepreneurs, technologists, and scientists would like to know the best method to commercialize their innovations. Then, this question arises: what is the best commercialization method? To answer the question, a systematic literature review was conducted, and practitioners were interviewed. The literature results revealed that there are many methods, but new methods are needed to improve commercialization, especially during these times of economic crisis and political uncertainty. Similarly, the empirical results showed there are several methods, but the best method is the one that reduces costs, reduces the risks associated with uncertainty, and improves customer participation and acceptability. Therefore, it was concluded that a new commercialization method is essential for today's high technologies, and such a method was presented.
Keywords: commercialization method, technology, knowledge, intellectual property, innovation, invention
Procedia PDF Downloads 342
19199 Urinary Exosome miR-30c-5p as a Biomarker for Early-Stage Clear Cell Renal Cell Carcinoma
Authors: Shangqing Song, Bin Xu, Yajun Cheng, Zhong Wang
Abstract:
miRNAs derived from exosomes present in body fluids such as urine have been regarded as potential biomarkers for the diagnosis and prognosis of various human cancers, as mature miRNAs can be stably preserved by exosomes. However, their potential value in clear cell renal cell carcinoma (ccRCC) diagnosis and prognosis remains unclear. In the present study, differentially expressed miRNAs from urinary exosomes were identified by next-generation sequencing (NGS) technology. Sixteen differentially expressed miRNAs were identified between ccRCC patients and healthy donors. To explore a specific diagnostic biomarker of ccRCC, we validated these urinary exosomes by qRT-PCR in 70 early-stage renal cancer patients, 30 healthy people, and patients with other urinary system cancers, including 30 early-stage prostate cancer patients and 30 early-stage bladder cancer patients. The results showed that urinary exosomal miR-30c-5p could be stably amplified; meanwhile, the expression of miR-30c-5p did not differ significantly between other urinary system cancers and healthy controls. However, the expression level of miR-30c-5p in the urinary exosomes of ccRCC patients was lower than in healthy people, and the receiver operating characteristic (ROC) curve showed an area under the curve (AUC) value of 0.8192 (95% confidence interval 0.7388-0.8996, P = 0.0000). In addition, up-regulating miR-30c-5p expression could inhibit renal cell carcinoma cell growth. Lastly, HSPA5 was found to be a direct target gene of miR-30c-5p, and HSPA5 depletion reversed the promoting effect on ccRCC growth caused by the miR-30c-5p inhibitor. In conclusion, this study demonstrates that urinary exosomal miR-30c-5p is readily accessible as a diagnostic biomarker of early-stage ccRCC, and miR-30c-5p might modulate the expression of HSPA5, which correlates with the progression of ccRCC.
Keywords: clear cell renal cell carcinoma, exosome, HSPA5, miR-30c-5p
Procedia PDF Downloads 267
19198 Critical Comparison of Two Teaching Methods: The Grammar Translation Method and the Communicative Teaching Method
Authors: Aicha Zohbie
Abstract:
The purpose of this paper is to critically compare two teaching methods: the communicative method and the grammar-translation method. The paper presents the importance of language awareness as an approach to teaching and learning a language, and some challenges that language teachers face. In addition, the paper strives to determine whether the adoption of the communicative teaching method or the grammar-translation method would be more effective for teaching a language. A variety of features are considered in comparing the two methods: the purpose of each method, the techniques used, teachers’ and students’ roles, the use of L1, the skills that are emphasized, the correction of students’ errors, and the students’ assessment. Finally, the paper includes suggestions and recommendations for implementing an approach that best meets the students’ needs in a classroom.
Keywords: language teaching methods, language awareness, communicative method, grammar translation method, advantages and disadvantages
Procedia PDF Downloads 151
19197 Investigation of Threshold Voltage Shift in Gamma Irradiated N-Channel and P-Channel MOS Transistors of CD4007
Authors: S. Boorboor, S. A. H. Feghhi, H. Jafari
Abstract:
Ionizing radiation causes different kinds of damage in electronic components. MOSFETs, the most common transistors in today’s digital and analog circuits, are severely sensitive to total ionizing dose (TID) damage. In this work, the threshold voltage shift of the CD4007 device, which is an integrated circuit including P-channel and N-channel MOS transistors, was investigated for low-dose gamma irradiation under different gate bias voltages. We used the linear extrapolation method to extract the threshold voltage from the ID-VG characteristic curve. The results showed that the threshold voltage shift was approximately 27.5 mV/Gy for the N-channel and 3.5 mV/Gy for the P-channel transistors at a gate bias of |9 V| after irradiation by a Co-60 gamma-ray source. Although the sensitivity of the devices under test was strongly dependent on the biasing condition and transistor type, the threshold voltage shifted linearly versus accumulated dose in all cases. The overall results show that the application of the CD4007 as an electronic buffer in a radiation therapy system is limited by TID damage. However, this integrated circuit can be used as a cheap and sensitive radiation dosimeter for accumulated dose measurement in radiation therapy systems.
Keywords: threshold voltage shift, MOS transistor, linear extrapolation, gamma irradiation
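A minimal sketch of the linear extrapolation method named above: locate the point of maximum transconductance gm on the ID-VG curve, take the tangent there, and read the threshold voltage from its intercept with the VG axis. The device curve below is synthetic, not CD4007 measurement data.

```python
import numpy as np

def vth_linear_extrapolation(vg, id_):
    """Extract threshold voltage by linear extrapolation: take the
    tangent to the ID-VG curve at the point of maximum transconductance
    gm and return its intercept with the VG axis (where ID = 0)."""
    gm = np.gradient(id_, vg)
    i = int(np.argmax(gm))
    return vg[i] - id_[i] / gm[i]

# Synthetic linear-region ID-VG curve: linear above Vth = 1.2 V with a
# smooth subthreshold turn-on (softplus), standing in for real data.
vg = np.linspace(0.0, 3.0, 301)
id_ = 1e-4 * np.log1p(np.exp((vg - 1.2) / 0.1))
print(f"extracted Vth = {vth_linear_extrapolation(vg, id_):.2f} V")
```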
Procedia PDF Downloads 283
19196 Supervised/Unsupervised Mahalanobis Algorithm for Improving Performance for Cyberattack Detection over Communications Networks
Authors: Radhika Ranjan Roy
Abstract:
Deployment of machine learning (ML)/deep learning (DL) algorithms for cyberattack detection in operational communications networks (wireless and/or wire-line) is being delayed because of low performance metrics (e.g., recall, precision, and f₁-score). If datasets become imbalanced, which is the usual case for communications networks, the performance tends to become worse. The complexity of reducing the dimensions of the feature sets to increase performance is also a huge problem. Mahalanobis algorithms have been widely applied in scientific research because Mahalanobis distance metric learning is a successful framework. In this paper, we have investigated the Mahalanobis binary classifier algorithm for increasing cyberattack detection performance over communications networks as a proof of concept. We have also found that the high-dimensional information in intermediate features, which is not utilized as much for classification tasks in ML/DL algorithms, is the main contributor to the state-of-the-art performance of the Mahalanobis method, even for imbalanced and sparse datasets. With no feature reduction, MD offers uniform results for precision, recall, and f₁-score for the unbalanced and sparse NSL-KDD dataset.
Keywords: Mahalanobis distance, machine learning, deep learning, NSL-KDD, local intrinsic dimensionality, chi-square, positive semi-definite, area under the curve
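A minimal sketch of a Mahalanobis-distance binary classifier of the kind investigated above: fit a mean and covariance per class, then assign each sample to the nearer class. The toy data and the pseudo-inverse choice are assumptions for illustration, not the paper's exact pipeline or the NSL-KDD features.

```python
import numpy as np

class MahalanobisClassifier:
    """Fit a mean and covariance per class; assign each sample to the
    class with the smaller Mahalanobis distance. A sketch of the general
    technique, not the exact pipeline used in the paper."""

    def fit(self, X, y):
        self.stats = {}
        for c in np.unique(y):
            Xc = X[y == c]
            mu = Xc.mean(axis=0)
            # Pseudo-inverse keeps this usable for sparse/ill-conditioned data.
            cov_inv = np.linalg.pinv(np.cov(Xc, rowvar=False))
            self.stats[c] = (mu, cov_inv)
        return self

    def predict(self, X):
        def dist(x, mu, cov_inv):
            d = x - mu
            return float(d @ cov_inv @ d)
        return np.array([min(self.stats,
                             key=lambda c: dist(x, *self.stats[c]))
                         for x in X])

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (50, 4)), rng.normal(2, 1, (50, 4))])
y = np.r_[np.zeros(50), np.ones(50)]
clf = MahalanobisClassifier().fit(X, y)
print((clf.predict(X) == y).mean())  # training accuracy on toy data
```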
Procedia PDF Downloads 78
19195 Numerical Iteration Method to Find New Formulas for Nonlinear Equations
Authors: Kholod Mohammad Abualnaja
Abstract:
A new algorithm is presented to derive new iterative methods for solving nonlinear equations F(x)=0 by using the variational iteration method. The efficiency of the considered method is illustrated by an example. The results show that the proposed iteration technique, without linearization or small perturbation, is very effective and convenient.
Keywords: variational iteration method, nonlinear equations, Lagrange multiplier, algorithms
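The paper's derived formulas are not given in the abstract; as a hedged illustration of how the variational iteration method yields root-finding schemes, the correction x_{n+1} = x_n + λ·F(x_n) with the Lagrange multiplier fixed by stationarity, λ = −1/F′(x_n), recovers Newton's method:

```python
def vim_newton(f, df, x0, tol=1e-12, max_iter=50):
    """Iteration x_{n+1} = x_n + lam * f(x_n) with the Lagrange
    multiplier chosen from the stationarity condition, lam = -1/f'(x_n),
    which recovers Newton's method. Illustrative of how VIM yields
    iterative schemes; the paper's new formulas are not reproduced."""
    x = x0
    for _ in range(max_iter):
        lam = -1.0 / df(x)
        x_new = x + lam * f(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Solve x**3 - 2x - 5 = 0 (Wallis' classic example), root ~ 2.0945515.
print(vim_newton(lambda x: x**3 - 2*x - 5, lambda x: 3*x**2 - 2, 2.0))
```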
Procedia PDF Downloads 545
19194 Comparison of Finite-Element and IEC Methods for Cable Thermal Analysis under Various Operating Environments
Authors: M. S. Baazzim, M. S. Al-Saud, M. A. El-Kady
Abstract:
In this paper, a steady-state ampacity (current-carrying capacity) evaluation of an underground power cable system is performed using analytical and numerical methods for different conditions (depth of cable, spacing between phases, soil thermal resistivity, ambient temperature, wind speed) at two system voltage levels, 132 and 380 kV. The analytical (traditional) method is based on the thermal analysis developed by Neher-McGrath, further enhanced by the International Electrotechnical Commission (IEC) and published in standard IEC 60287. The numerical method used is the finite element method, applied via commercial software based on it.
Keywords: cable ampacity, finite element method, underground cable, thermal rating
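For reference, the IEC 60287 steady-state rating for a buried cable has the form I = √[(Δθ − W_d(0.5T₁ + n(T₂+T₃+T₄))) / (R·T₁ + n·R(1+λ₁)T₂ + n·R(1+λ₁+λ₂)(T₃+T₄))]. The sketch below implements this equation with round illustrative values; it is a simplified reading of the standard from memory, not a rated design calculation.

```python
import math

def iec_ampacity(delta_theta, R, T1, T2, T3, T4,
                 Wd=0.0, n=1, lam1=0.0, lam2=0.0):
    """Steady-state ampacity per the IEC 60287 buried-cable equation.

    delta_theta -- conductor temperature rise above ambient (K)
    R           -- AC conductor resistance at operating temp (ohm/m)
    T1..T4      -- thermal resistances: insulation, bedding, serving,
                   surrounding soil (K.m/W)
    Wd          -- dielectric loss per unit length (W/m)
    lam1, lam2  -- sheath and armour loss factors
    The numbers below are round illustrative values, not a rated design.
    """
    num = delta_theta - Wd * (0.5 * T1 + n * (T2 + T3 + T4))
    den = (R * T1 + n * R * (1 + lam1) * T2
           + n * R * (1 + lam1 + lam2) * (T3 + T4))
    return math.sqrt(num / den)

# 90 C conductor, 20 C ambient; placeholder thermal resistances.
print(f"{iec_ampacity(70.0, 3e-5, 0.4, 0.1, 0.1, 1.0):.0f} A")
```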
Procedia PDF Downloads 379
19193 Evaluation of Hepatic Metabolite Changes for Differentiation Between Non-Alcoholic Steatohepatitis and Simple Hepatic Steatosis Using Long Echo-Time Proton Magnetic Resonance Spectroscopy
Authors: Tae-Hoon Kim, Kwon-Ha Yoon, Hong Young Jun, Ki-Jong Kim, Young Hwan Lee, Myeung Su Lee, Keum Ha Choi, Ki Jung Yun, Eun Young Cho, Yong-Yeon Jeong, Chung-Hwan Jun
Abstract:
Purpose: To assess changes in hepatic metabolites for differentiation between non-alcoholic steatohepatitis (NASH) and simple steatosis using proton magnetic resonance spectroscopy (1H-MRS) in both humans and an animal model. Methods: The local institutional review board approved this study, and subjects gave written informed consent. 1H-MRS measurements were performed on a localized voxel of the liver using a point-resolved spectroscopy (PRESS) sequence, and the hepatic metabolites alanine (Ala), lactate/triglyceride (Lac/TG), and TG were analyzed in the NASH, simple steatosis, and control groups. Group differences were tested with ANOVA and Tukey’s post-hoc tests, and diagnostic accuracy was tested by calculating the area under the receiver operating characteristic (ROC) curve. The associations between metabolite concentrations and pathologic grades or non-alcoholic fatty liver disease (NAFLD) activity scores were assessed by Pearson’s correlation. Results: Patients with NASH showed elevated Ala (p < 0.001), Lac/TG (p < 0.001), and TG (p < 0.05) concentrations when compared with patients who had simple steatosis and healthy controls. NASH patients had higher levels of Ala (mean±SEM, 52.5±8.3 vs 2.0±0.9; p < 0.001) and Lac/TG (824.0±168.2 vs 394.1±89.8; p < 0.05) than those with simple steatosis. The area under the ROC curve for distinguishing NASH from simple steatosis was 1.00 (95% confidence interval: 1.00, 1.00) with Ala and 0.782 (95% confidence interval: 0.61, 0.96) with Lac/TG. The Ala and Lac/TG levels were well correlated with steatosis grade, lobular inflammation, and NAFLD activity scores. The metabolic changes in humans were reproducible in a mouse model induced by streptozotocin injection and a high-fat diet. Conclusion: 1H-MRS would be useful for differentiating patients with NASH from those with simple hepatic steatosis.
Keywords: non-alcoholic fatty liver disease, non-alcoholic steatohepatitis, 1H MR spectroscopy, hepatic metabolites
Procedia PDF Downloads 326
19192 Computer-Aided Discrimination of Benign and Malignant Thyroid Nodules by Ultrasound Imaging
Authors: Akbar Gharbali, Ali Abbasian Ardekani, Afshin Mohammadi
Abstract:
Introduction: Thyroid nodules have an incidence of 33-68% in the general population, and 5-15% of these nodules are malignant. Early detection and treatment of thyroid nodules increase the cure rate and provide optimal treatment. Among medical imaging methods, ultrasound is the imaging technique of choice for assessment of thyroid nodules. Confirming the diagnosis usually demands repeated fine-needle aspiration biopsy (FNAB), so current management has morbidity and non-zero mortality. Objective: To explore the diagnostic potential of automatic texture analysis (TA) methods in differentiating benign and malignant thyroid nodules on ultrasound imaging, in order to support reliable diagnosis and monitoring of thyroid nodules in their early stages with no need for biopsy. Material and Methods: The thyroid US image database consisted of 70 patients (26 benign and 44 malignant), reported by a radiologist and proven by biopsy. Two slices per patient were loaded in Mazda software version 4.6 for automatic texture analysis. Regions of interest (ROIs) were defined within the abnormal part of the thyroid nodule ultrasound images. Gray levels within each ROI were normalized according to three normalization schemes: N1: default or original gray levels; N2: +/- 3 sigma, i.e., dynamic intensity limited to µ +/- 3σ; and N3: intensity limited to the 1%-99% range. Up to 270 multiscale texture feature parameters per ROI per normalization scheme were computed from well-known statistical methods employed in the Mazda software. From the statistical point of view, not all calculated texture feature parameters are useful for texture analysis, so the features were reduced to the 10 best and most effective features per normalization scheme, based on the maximum Fisher coefficient and the minimum probability of classification error and average correlation coefficients (POE+ACC). These features were analyzed under two standardization states (standard (S) and non-standard (NS)) with Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), and Non-Linear Discriminant Analysis (NDA). A 1-NN classifier was used to distinguish between benign and malignant tumors. The confusion matrix and receiver operating characteristic (ROC) curve analysis were used for the formulation of more reliable criteria of the performance of the employed texture analysis methods. Results: The results demonstrated the influence of the normalization schemes and reduction methods on the effectiveness of the obtained features as descriptors of discrimination power and on the classification results. The subset of features selected under 1%-99% normalization, POE+ACC reduction, and NDA texture analysis yielded high discrimination performance, with an area under the ROC curve (Az) of 0.9722 in distinguishing benign from malignant thyroid nodules, corresponding to a sensitivity of 94.45%, specificity of 100%, and accuracy of 97.14%. Conclusions: Our results indicate that computer-aided diagnosis is a reliable method and can provide useful information to help radiologists in the detection and classification of benign and malignant thyroid nodules.
Keywords: ultrasound imaging, thyroid nodules, computer aided diagnosis, texture analysis, PCA, LDA, NDA
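A minimal sketch of the final classification stage described above: percentile-clipping normalization in the spirit of the N3 scheme, followed by a 1-NN classifier and an Az-style evaluation. The feature matrix is synthetic stand-in data with the study's class sizes (26 benign, 44 malignant), not the real Mazda texture features.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
# Synthetic stand-ins for 10 selected texture features of 70 nodules
# (26 benign = 0, 44 malignant = 1); not the study's real data.
X = np.vstack([rng.normal(0.0, 1.0, (26, 10)),
               rng.normal(1.0, 1.0, (44, 10))])
y = np.r_[np.zeros(26), np.ones(44)]

# N3-style normalization: clip each feature to its 1st-99th percentiles.
lo, hi = np.percentile(X, [1, 99], axis=0)
Xn = np.clip(X, lo, hi)

# 1-NN classification with cross-validated predictions, then Az (AUC).
pred = cross_val_predict(KNeighborsClassifier(n_neighbors=1), Xn, y, cv=5)
print(f"accuracy = {(pred == y).mean():.2f}, "
      f"Az = {roc_auc_score(y, pred):.2f}")
```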
Procedia PDF Downloads 279