Search results for: light curve modelling
5917 Solar Photocatalysis of Methyl Orange Using Multi-Ion Doped TiO2 Catalysts
Authors: Victor R. Thulari, John Akach, Haleden Chiririwa, Aoyi Ochieng
Abstract:
Solar-light activated titanium dioxide photocatalysts were prepared by hydrolysis of titanium (IV) isopropoxide with thiourea, followed by calcination at 450 °C. The experiments demonstrated that methyl orange in aqueous solution was successfully degraded under solar light using doped TiO2. The photocatalytic oxidation of the mono-azo dye methyl orange was investigated using multi-ion doped TiO2 under solar light. Solutions were irradiated by solar light until high removal was achieved. No degradation of methyl orange occurred in the dark or in the absence of TiO2. A variety of laboratory-prepared TiO2 catalysts, both un-doped and doped using titanium (IV) isopropoxide and thiourea as a dopant, were tested in order to compare their photoreactivity. The efficiency of the process was found to depend strongly on the working conditions. The highest degradation rate of methyl orange was obtained at optimum dosage using commercially produced TiO2. Our work focused on the laboratory-synthesized catalyst: the maximum methyl orange removal of 81% was achieved with a catalyst loading of 0.04 g/L, an initial pH of 3 and a methyl orange concentration of 0.005 g/L using the multi-ion doped catalyst. The kinetics of photocatalytic methyl orange dye degradation was found to follow a pseudo-first-order rate law. The presence of the multi-ion dopant (thiourea) enhanced the photoefficiency of the titanium dioxide catalyst.
Keywords: degradation, kinetics, methyl orange, photocatalysis
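The pseudo-first-order rate law reported above can be checked by fitting ln(C0/C) = kt through the origin; a minimal sketch, where the time points and concentrations below are hypothetical illustrations rather than the study's measurements:

```python
import math

def pseudo_first_order_k(times, concentrations):
    """Estimate the pseudo-first-order rate constant k from
    ln(C0/C) = k*t via least squares through the origin."""
    c0 = concentrations[0]
    # y = ln(C0/C) should be linear in t with slope k
    ys = [math.log(c0 / c) for c in concentrations]
    num = sum(t * y for t, y in zip(times, ys))
    den = sum(t * t for t in times)
    return num / den

# Hypothetical degradation data (time in min, concentration in g/L)
times = [0, 30, 60, 90, 120]
conc = [0.005, 0.0037, 0.0027, 0.0020, 0.0015]
k = pseudo_first_order_k(times, conc)
```

Plotting ln(C0/C) against t and checking linearity is the usual way to confirm the pseudo-first-order assumption before reporting k.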
Procedia PDF Downloads 337
5916 Screening Post-Menopausal Women for Osteoporosis by Complex Impedance Measurements of the Dominant Arm
Authors: Yekta Ülgen, Fırat Matur
Abstract:
Cole-Cole parameters of 40 post-menopausal women are compared with their DEXA bone mineral density measurements. Impedance characteristics of the four extremities are compared; the left and right extremities are statistically the same, but the lower extremities are statistically different from the upper ones due to their different fat content. The correlation of Cole-Cole impedance parameters to bone mineral density (BMD) is observed to be higher for the dominant arm. In this post-menopausal population, ANOVA tests of the dominant-arm characteristic frequency, as a predictor for the DEXA-classified osteopenic and osteoporotic population around the lumbar spine, are statistically very significant. When used for total lumbar spine osteoporosis diagnosis, the area under the Receiver Operating Characteristic curve of the characteristic frequency is 0.875, suggesting that the Cole-Cole plot characteristic frequency could be a useful diagnostic parameter when integrated into standard screening methods for osteoporosis. Moreover, the characteristic frequency can be measured directly by monitoring the frequency-driven angular behavior of the dominant arm without performing any complex calculation.
Keywords: bioimpedance spectroscopy, bone mineral density, osteoporosis, characteristic frequency, receiver operating curve
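The reported area under the Receiver Operating Characteristic curve (0.875) can be reproduced conceptually from paired scores and outcomes; a minimal sketch using the rank-sum formulation, where the characteristic-frequency values and labels below are invented for illustration:

```python
def roc_auc(scores, labels):
    """AUC via the rank-sum (Mann-Whitney U) formulation: the probability
    that a randomly chosen positive case scores higher than a randomly
    chosen negative case, counting ties as 1/2."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))

# Hypothetical characteristic-frequency values (kHz) for
# osteoporotic (1) vs. normal (0) subjects
scores = [42, 55, 38, 60, 46, 35, 47, 58]
labels = [0, 1, 0, 1, 1, 0, 0, 1]
auc = roc_auc(scores, labels)
```

An AUC of 1.0 would mean perfect separation; 0.5 means the predictor is no better than chance.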
Procedia PDF Downloads 522
5915 Modelling the Effect of Distancing and Wearing of Face Masks on Transmission of COVID-19 Infection Dynamics
Authors: Nurudeen Oluwasola Lasisi
Abstract:
COVID-19 is an infection caused by a coronavirus and has been designated a pandemic worldwide. In this paper, we propose a model to study the effect of distancing and wearing masks on the transmission of COVID-19 infection dynamics. The invariant region of the model is established. The COVID-19-free equilibrium and the reproduction number of the model were obtained. The local and global stability of the model are determined using the linearization technique and the Lyapunov method. It was found that the COVID-19-free equilibrium state is locally asymptotically stable in the feasible region Ω if R₀ < 1 and globally asymptotically stable if R₀ < 1, and unstable if R₀ > 1. Furthermore, numerical analysis and simulations of the dynamics of the COVID-19 infection are presented.
Keywords: distancing, reproduction number, wearing of mask, local and global stability, modelling, transmission
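The stability result above (disease-free equilibrium stable when R₀ < 1) can be illustrated numerically; a minimal SIR-style sketch, not the paper's model, in which mask wearing and distancing are folded into a single hypothetical compliance factor that scales the transmission rate:

```python
def simulate_sir(beta, gamma, s0, i0, days, dt=0.1):
    """Forward-Euler integration of a basic SIR model; masks and
    distancing are modelled as a multiplicative reduction of beta."""
    s, i = s0, i0
    steps = int(days / dt)
    for _ in range(steps):
        new_inf = beta * s * i * dt
        s -= new_inf
        i += new_inf - gamma * i * dt
    return i

beta, gamma = 0.3, 0.2     # hypothetical transmission/recovery rates (per day)
compliance = 0.5           # hypothetical fraction of contacts blocked
beta_eff = beta * (1 - compliance)
r0 = beta_eff / gamma      # reproduction number with interventions
final_i = simulate_sir(beta_eff, gamma, s0=0.99, i0=0.01, days=100)
```

With these values R₀ = 0.75 < 1, so the infected fraction decays toward the disease-free equilibrium, consistent with the stated stability condition.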
Procedia PDF Downloads 138
5914 On Board Measurement of Real Exhaust Emission of Light-Duty Vehicles in Algeria
Authors: R. Kerbachi, S. Chikhi, M. Boughedaoui
Abstract:
The study presents an analysis of the Algerian vehicle fleet and the resultant emissions. The emission of air pollutants from road transportation (CO, THC, NOx and CO2) was measured on 17 light-duty vehicles in real traffic. This sample is representative of Algerian light vehicles in terms of fuel quality (gasoline, diesel and liquefied petroleum gas) and technology (injection system and emission control). The experimental methodology for measuring unit emissions of vehicles in real traffic is based on a mini constant volume sampler for gas sampling and a set of gas analyzers for CO2, CO, NOx and THC, with instrumentation to measure kinematics, gas temperature and pressure. The apparatus is also equipped with data logging and data transfer instruments. The results were compared with the database of European light vehicles (Artemis). It was shown that liquefied petroleum gas (LPG) injection technology has a significant impact on air pollutant emissions. Therefore, with the exception of nitrogen oxide compounds, uncatalyzed LPG vehicles produce lower unit emissions of air pollutants than uncatalyzed gasoline vehicles. LPG performance appears to be lower under real driving conditions than expected on a chassis dynamometer. On the other hand, the results show that uncatalyzed gasoline vehicles emit high levels of carbon monoxide and nitrogen oxides. Overall, and in the absence of standards in Algeria, unit emissions are much higher than Euro 3. The enforcement of pollutant emission standards in developing countries is an important step towards introducing cleaner technology and reducing vehicular emissions.
Keywords: on-board measurements of unit emissions of CO, HC, NOx and CO2, light vehicles, mini-CVS, LPG-fuel, Artemis, Algeria
Procedia PDF Downloads 275
5913 Discussion on Dispersion Curves of Non-penetrable Soils from in-Situ Seismic Dilatometer Measurements
Authors: Angelo Aloisio, Dag Pasquale Pasca, Massimo Fragiacomo, Ferdinando Totani, Gianfranco Totani
Abstract:
The estimate of the velocity of shear waves (Vs) is essential in seismic engineering to characterize the dynamic response of soils. There are various direct methods to estimate Vs. The authors report the results of site characterization in Macerata, where they measured Vs using the seismic dilatometer in a 100 m deep borehole. The standard Vs estimation originates from the cross-correlation between the signals acquired by two geophones at increasing depths. This paper focuses on estimating the dependence of Vs on the wavenumber. The dispersion curves reveal an unexpected hyperbolic dispersion curve typical of Lamb waves. Interestingly, the contribution of Lamb waves may be notable up to 100 m depth. The amplitude of surface waves decreases rapidly with depth; still, their influence may be essential up to depths considered unusual for standard geotechnical investigations, where their effect is generally neglected. Accordingly, these waves may bias the outcomes of standard Vs estimations, which ignore frequency-dependent phenomena. The paper proposes an enhancement of the accepted procedure to estimate Vs and addresses the importance of Lamb waves in soil characterization.
Keywords: dispersion curve, seismic dilatometer, shear wave, soil mechanics
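The standard Vs estimation described above, cross-correlating signals from two geophones a known distance apart, can be sketched as follows; the traces, spacing, and sampling interval below are synthetic illustrations:

```python
def shear_wave_velocity(trace_upper, trace_lower, spacing_m, dt_s):
    """Estimate Vs from the lag that maximises the cross-correlation
    between two geophone traces recorded a known distance apart."""
    n = len(trace_upper)
    best_lag, best_val = 0, float("-inf")
    for lag in range(1, n):
        val = sum(trace_upper[k] * trace_lower[k + lag] for k in range(n - lag))
        if val > best_val:
            best_val, best_lag = val, lag
    # travel time = lag * sampling interval
    return spacing_m / (best_lag * dt_s)

# Synthetic traces: the lower geophone sees the same pulse 4 samples later
pulse = [0, 1, 3, 1, 0]
upper = pulse + [0] * 11
lower = [0] * 4 + pulse + [0] * 7
vs = shear_wave_velocity(upper, lower, spacing_m=0.5, dt_s=0.0005)
```

Here the 4-sample lag at 0.5 ms sampling over a 0.5 m geophone spacing yields Vs = 250 m/s; this is exactly the frequency-independent estimate that the paper argues can be biased by Lamb-wave dispersion.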
Procedia PDF Downloads 174
5912 Business Intelligence for Profiling of Telecommunication Customer
Authors: Rokhmatul Insani, Hira Laksmiwati Soemitro
Abstract:
Business Intelligence is a methodology that exploits data to produce information and knowledge systematically; business intelligence can support the decision-making process. Two methods in business intelligence are data warehousing and data mining. A data warehouse can store historical data from transactional data. For data modelling in the data warehouse, we apply dimensional modelling by Kimball. Data mining is used to extract patterns from the data and gain insight from them. Data mining has many techniques, one of which is segmentation. For profiling of telecommunication customers, we use customer segmentation according to the customer's usage of services, customer invoices and customer payments. Customers can be grouped according to their characteristics, and profitable customers can be identified. We apply the K-Means clustering algorithm for segmentation. As input variables for the algorithm, we use the RFM (Recency, Frequency and Monetary) model. For all data mining processes, we use IBM SPSS Modeler.
Keywords: business intelligence, customer segmentation, data warehouse, data mining
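The K-Means segmentation on RFM variables can be sketched without any BI tooling; a minimal pure-Python version with invented, normalised RFM profiles (IBM SPSS Modeler's implementation differs, e.g. in centroid seeding):

```python
import math

def kmeans(points, k, iters=20):
    """Plain k-means sketch for RFM vectors (recency, frequency, monetary).
    Initial centroids are simply the first k points; production tools
    use smarter seeding such as k-means++."""
    centroids = [list(p) for p in points[:k]]
    for _ in range(iters):
        # assign each point to its nearest centroid
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda c: math.dist(p, centroids[c]))
            clusters[j].append(p)
        # move each centroid to the mean of its cluster
        for j, cl in enumerate(clusters):
            if cl:
                centroids[j] = [sum(v) / len(cl) for v in zip(*cl)]
    return centroids

# Hypothetical normalised RFM profiles: low-value vs. high-value customers
rfm = [(0.9, 0.1, 0.1), (0.8, 0.2, 0.1), (0.1, 0.9, 0.8), (0.2, 0.8, 0.9)]
centroids = kmeans(rfm, k=2)
```

On this toy data the two centroids separate recently-inactive low-spend customers from frequent high-spend ones, which is the grouping used for profitability profiling.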
Procedia PDF Downloads 485
5911 Application of Decline Curve Analysis to Depleted Wells in a Cluster and then Predicting the Performance of Currently Flowing Wells
Authors: Satish Kumar Pappu
Abstract:
The most common questions frequently asked in the oil and gas industry are what the current production rate from a particular well is and what the approximate predicted life of that well is. These questions can be answered through forecasting of important realistic data such as flowing tubing head pressures (FTHP) and production decline curves, which are used to predict the future performance of a well in a reservoir. With the advent of directional drilling, cluster well drilling has gained much importance and has, in fact, revolutionized the oil and gas industry. An oil or gas reservoir can generally be described as a collection of several overlying, producing and potentially producing sands into which a number of wells are drilled, depending upon the in-place volume and several other important factors, both technical and economical in nature; in some sands only one well is drilled, and in some, more than one. The aim of this study is to derive important information from the data collected over a period of time at regular intervals on a depleted well in a reservoir sand and to apply this information to predict the performance of other wells in that reservoir sand. Depleted wells are the most common observations when an oil or gas field is being visited, which makes the application of this study more realistic in nature.
Keywords: decline curve analysis, estimation of future gas reserves, reservoir sands, reservoir risk profile
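Decline curve analysis of a depleted well typically starts from an Arps-type model; a minimal sketch using exponential decline, where the initial rate, decline constant, and economic limit below are hypothetical:

```python
import math

def exponential_rate(qi, d, t):
    """Arps exponential decline: q(t) = qi * exp(-d*t)."""
    return qi * math.exp(-d * t)

def time_to_limit(qi, d, q_limit):
    """Remaining productive life until the rate drops to the economic limit."""
    return math.log(qi / q_limit) / d

def cumulative(qi, d, t):
    """Cumulative production under exponential decline: Np = (qi - q(t)) / d."""
    return (qi - exponential_rate(qi, d, t)) / d

# Hypothetical well: initial rate 1000 units/day, 5%/year effective decline
# constant, economic limit 100 units/day (time in years, d per year)
qi, d, q_lim = 1000.0, 0.05, 100.0
life = time_to_limit(qi, d, q_lim)
```

Parameters fitted on the depleted wells of a cluster can then be transferred, with judgement, to forecast the currently flowing wells in the same sand, which is the workflow the abstract describes.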
Procedia PDF Downloads 438
5910 Linking Business Process Models and System Models Based on Business Process Modelling
Authors: Faisal A. Aburub
Abstract:
Organizations today need to invest in software in order to run their businesses, and to meet the organizations' objectives, the software should be in line with the business processes. This research presents an approach for linking process models and system models. In particular, the new approach aims to synthesize a sequence diagram based on a role activity diagram (RAD) model. The approach includes four steps, namely: create a business process model using RAD, identify computerized activities, identify entities in the sequence diagram, and identify messages in the sequence diagram. The new approach has been validated using the process of student registration at the University of Petra as a case study. Further research is required to validate the new approach in different domains.
Keywords: business process modelling, system models, role activity diagrams, sequence diagrams
Procedia PDF Downloads 386
5909 The Nursing Rounds System: Effect of Patient's Call Light Use, Bed Sores, Fall and Satisfaction Level
Authors: Bassem Saleh, Hussam Nusair, Nariman Al Zubadi, Shams Al Shloul, Usama Saleh
Abstract:
The nursing round system (NRS) means checking patients on an hourly basis during the A shift (0700–2200 h) and once every 2 h during the B shift (2200–0700 h) by the assigned nursing staff. The overall goal of this prospective study is to implement an NRS in a major rehabilitation centre, Sultan Bin Abdulaziz Humanitarian City, in the Riyadh area of the Kingdom of Saudi Arabia. The purposes of this study are to measure the effect of the NRS on: (i) the use of the patient call light; (ii) the number of incidences of patient falls; (iii) the number of incidences of hospital-acquired bed sores; and (iv) the level of patient satisfaction. All patients hospitalized in the male stroke unit were involved in this study. For a period of 8 weeks (17 December 2009–17 February 2010), all nursing staff on the unit recorded each call light and the patient's need. Implementation of the NRS started on 18 February 2010 and lasted for 8 weeks, until 18 April 2010. Data collected throughout this period were compared with data collected during the 8-week period immediately preceding the implementation of the NRS (17 December 2009–17 February 2010) in order to measure the impact on call light use. The following information was collected on all subjects involved in the study: (i) the Demographic Information Form; (ii) the authors' developed NRS Audit Form; (iii) the Patient Call Light Audit Form; (iv) the Patient Fall Audit Record; (v) the Hospital-Acquired Bed Sores Audit Form; and (vi) the hospital-developed Patient Satisfaction Records. The findings suggested a significant reduction in the use of the call bell (P < 0.001) and a significant reduction in fall incidence (P < 0.01), while pressure ulcers were reduced by 50% before and after the implementation of the NRS. In addition, the implementation of the NRS increased patient satisfaction by 7/5 (P < 0.05).
Keywords: call light, patient-care management, patient safety, patient satisfaction, rounds
Procedia PDF Downloads 375
5908 Comparison Between a Droplet Digital PCR and Real Time PCR Method in Quantification of HBV DNA
Authors: Surangrat Srisurapanon, Chatchawal Wongjitrat, Navin Horthongkham, Ruengpung Sutthent
Abstract:
HBV infection causes a potentially serious public health problem. The ability to detect the HBV DNA concentration is of importance and has been improved continuously. In quantitative Polymerase Chain Reaction (qPCR), several factors involved in standardization, such as the source of material, the calibration standard curve and the PCR efficiency, are inconsistent. Digital PCR (dPCR) is an alternative PCR-based technique for absolute quantification using Poisson statistics without requiring a standard curve. Therefore, the aim of this study is to compare the HBV DNA data sets generated by the dPCR and qPCR methods. All samples were quantified by Abbott's real-time PCR, and 54 samples with 2-6 log10 HBV DNA were selected for comparison with dPCR. Of these 54 samples, two outlier samples were defined as negative by dPCR, whereas 52 samples were positive by both tests. The difference between the two assays was less than 0.25 log IU/mL in 24/52 (46%) of paired samples, less than 0.5 log IU/mL in 46/52 samples (88%) and less than 1 log in 50/52 samples (96%). The correlation coefficient was r = 0.788 with a P-value < 0.0001. Compared to qPCR, the data generated by dPCR tended to be overestimated in samples with low HBV DNA concentrations and underestimated in samples with high viral loads. The variation in the dPCR measurement might be due to pre-amplification bias of the template. Moreover, a minor drawback of dPCR is the large quantity of DNA that has to be used compared to qPCR. Since the technology is relatively new, the limitations of this assay will be improved.
Keywords: hepatitis B virus, real time PCR, digital PCR, DNA quantification
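Absolute quantification in dPCR follows from Poisson statistics on the fraction of negative partitions, as noted above; a minimal sketch in which the partition counts and partition volume are hypothetical:

```python
import math

def dpcr_copies_per_partition(negatives, total):
    """Absolute quantification in digital PCR: with template molecules
    distributed at random across partitions, the fraction of negative
    partitions is P(0) = exp(-lambda), so lambda = -ln(negatives/total)."""
    return -math.log(negatives / total)

# Hypothetical run: 20000 partitions, 13000 remain negative
lam = dpcr_copies_per_partition(13000, 20000)

# Convert to a concentration, assuming a partition volume of 0.85 nL
partition_volume_nl = 0.85
copies_per_ul = lam / (partition_volume_nl * 1e-3)
```

No standard curve is needed: the concentration follows directly from counting, which is exactly the property that distinguishes dPCR from qPCR in this comparison.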
Procedia PDF Downloads 482
5907 Generalized Up-downlink Transmission using Black-White Hole Entanglement Generated by Two-level System Circuit
Authors: Muhammad Arif Jalil, Xaythavay Luangvilay, Montree Bunruangses, Somchat Sonasang, Preecha Yupapin
Abstract:
Black and white holes form the entangled pair ⟨BH│WH⟩, where a white hole occurs when the particle moves at the same speed as light. The entangled black-white hole pair is at the center, with the radian between the gap. When the speed of particle motion is slower than light, the black hole is gravitational (positive gravity), and the white hole is smaller than the black hole. On the downstream side, the entangled pair appears to have a black hole outside the gap that increases until the white hole disappears, which is the emptiness paradox. On the upstream side, when moving faster than light, white holes form time tunnels, with black holes becoming smaller. The particle will continue to move faster and further until the black hole disappears and becomes a wormhole (singularity) that is only a white hole in emptiness. This research studies the use of black and white holes generated by a two-level circuit as communication transmission carriers, from which high data transmission ability and capacity can be obtained. The black-white hole pair can be generated by the two-level system circuit when the speed of a particle on the circuit equals the speed of light. The black hole forms when the particle speed has increased from slower to equal to the light speed, while the white hole is established when the particle comes down faster than light. They are bound by the entangled pair, signal and idler, ⟨Signal│Idler⟩, and the virtual ones for the white hole, which has an angular displacement of half of π radian. A two-level system is made from an electronic circuit to create black and white holes bound by entangled bits that are immune or cloning-free from thieves. It starts by creating wave-particle behavior when its speed equals light; the black hole is in the middle of the entangled pair, which is the two-bit gate. The required information can be input into the system and wrapped by the black hole carrier.
A time tunnel occurs when the wave-particle speed is faster than light, from which the entangled pair is collapsed. The transmitted information is safely in the time tunnel. The required time and space can be modulated via the input for the downlink operation. The downlink is established when the particle speed, given in frequency (energy) form, comes down and enters the entangled gap, where this time the white hole is established. The information with the required destination is wrapped by the white hole and retrieved by the clients at the destination. The black and white holes then disappear, and the information can be recovered and used.
Keywords: cloning free, time machine, teleportation, two-level system
Procedia PDF Downloads 76
5906 A Mathematical Model Approach Regarding the Children’s Height Development with Fractional Calculus
Authors: Nisa Özge Önal, Kamil Karaçuha, Göksu Hazar Erdinç, Banu Bahar Karaçuha, Ertuğrul Karaçuha
Abstract:
The study aims to use a mathematical approach based on fractional calculus, which is developed to have the ability to continuously analyze the factors related to children's height development. Tracking the development of the child is becoming more important and meaningful. Knowing and determining the factors related to the physical development of the child at any desired time would provide better, more reliable and more accurate results for childcare. In this frame, 7 groups for the height percentile curves (3rd, 10th, 25th, 50th, 75th, 90th, and 97th) of Turkey are used. By using discrete height data of 0-18-year-old children and the least squares method, a continuous curve is developed that is valid for any time interval. By doing so, at any desired instant, it is possible to find the percentage and location of the child in the percentile chart. Here, with the help of fractional calculus theory, a mathematical model is developed. The outcomes of the proposed approach are quite promising compared to the linear and polynomial methods. The approach also makes it possible to predict the expected height values of children.
Keywords: children growth percentile, children physical development, fractional calculus, linear and polynomial model
Procedia PDF Downloads 149
5905 The Function of Polycomb Repressive Complex 2 (PRC2) In Plant Retrograde Signaling Pathway
Authors: Mingxi Zhou, Jiří Kubásek, Iva Mozgová
Abstract:
In Arabidopsis thaliana, histone 3 lysine 27 tri-methylation catalysed by PRC2 plays essential functions in the regulation of plant development, growth, and reproduction [1-2]. Despite numerous studies related to the role of PRC2 in developmental control, how PRC2 works in operational control in plants is unknown. In the present study, we found evidence that PRC2 probably participates in the regulation of the retrograde signalling pathway in Arabidopsis. Firstly, we observed that the rosette size and biomass in PRC2-depletion mutants (clf-29 and swn-3) are significantly higher than in WT under a medium light condition (ML: 125 µmol m⁻² s⁻¹), while under a medium-high light condition (MHL: 300 µmol m⁻² s⁻¹), the effect was reversed. Under the ML condition, the photosynthesis-related parameters determined by FluorCam did not show significant differences between WT and the mutants, while the pigment concentrations increased in the leaves of the PRC2-depletion mutants, especially in swn. The dynamics of light-responsive gene and circadian clock gene expression measured by RT-qPCR within 24 hours in the mutants were comparable to WT. However, we observed upregulation of photosynthesis-associated nuclear genes in the PRC2-depletion mutants under a chloroplast-damaging condition (treatment with lincomycin), corresponding to the so-called genome uncoupled (gun) phenotype. Here, we will present our results describing these phenotypes and our suggestions and outlook for studying the involvement of PRC2 in chloroplast-to-nucleus retrograde signalling.
Keywords: PRC2, retrograde signalling, light acclimation, photosynthesis
Procedia PDF Downloads 112
5904 Evaluation of Expected Annual Loss Probabilities of RC Moment Resisting Frames
Authors: Saemee Jun, Dong-Hyeon Shin, Tae-Sang Ahn, Hyung-Joon Kim
Abstract:
Building loss estimation methodologies, which have advanced considerably in recent decades, are usually used to estimate the social and economic impacts resulting from seismic structural damage. In accordance with these methods, this paper presents the evaluation of the annual loss probability of a reinforced concrete moment resisting frame designed according to the Korean Building Code. The annual loss probability is defined by (1) a fragility curve obtained from a capacity spectrum method similar to the method adopted in HAZUS, and (2) a seismic hazard curve derived from annual frequencies of exceedance per peak ground acceleration. Seismic fragilities are computed to calculate the annual loss probability of a given structure using functions depending on structural capacity, seismic demand, structural response and the probability of exceeding damage state thresholds. This study carried out a nonlinear static analysis to obtain the capacity of an RC moment resisting frame selected as a prototype building. The analysis results show that the probability of extensive structural damage in the prototype building is expected to be 0.004% in a year.
Keywords: expected annual loss, loss estimation, RC structure, fragility analysis
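The annual loss probability defined above, combining a fragility curve with a seismic hazard curve, can be sketched as a discrete convolution; the lognormal fragility parameters and hazard ordinates below are hypothetical, not the study's values:

```python
import math

def lognormal_fragility(pga, median, beta):
    """Probability of exceeding a damage state given PGA,
    using the standard lognormal fragility form."""
    z = math.log(pga / median) / beta
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def annual_damage_probability(pgas, annual_rates, median, beta):
    """Discrete convolution of the fragility curve with the hazard curve:
    sum over PGA bins of P(damage | pga) times the annual rate of the bin."""
    total = 0.0
    for j in range(len(pgas) - 1):
        p_ds = lognormal_fragility(0.5 * (pgas[j] + pgas[j + 1]), median, beta)
        total += p_ds * (annual_rates[j] - annual_rates[j + 1])
    return total

# Hypothetical hazard curve: annual frequency of exceeding each PGA (g)
pgas = [0.1, 0.2, 0.4, 0.8]
annual_rates = [0.02, 0.005, 0.001, 0.0001]
p_annual = annual_damage_probability(pgas, annual_rates, median=0.5, beta=0.4)
```

A finer PGA discretisation and a fragility per damage state would bring this sketch closer to the HAZUS-style procedure cited in the abstract.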
Procedia PDF Downloads 398
5903 Architectural Adaptation for Road Humps Detection in Adverse Light Scenario
Authors: Padmini S. Navalgund, Manasi Naik, Ujwala Patil
Abstract:
A road hump is a semi-cylindrical elevation made across specific locations of the road. A vehicle needs to maneuver the hump by reducing its speed to avoid damage and pass over the road hump safely. Road humps on road surfaces, if identified in advance, help to maintain the security and stability of vehicles, especially in adverse visibility conditions, viz. night scenarios. We have proposed a deep learning architecture adaptation by implementing the Mish activation function and developing a new classification loss function called "Effective Focal Loss" for Indian road hump detection in adverse light scenarios. We captured images comprising marked and unmarked road humps from two different types of cameras across South India to build a heterogeneous dataset. The heterogeneous dataset enabled the algorithm to train on varied examples and improve detection accuracy. The images were pre-processed and annotated for two classes, viz. marked hump and unmarked hump. The dataset from these images was used to train a single-stage object detection algorithm. We used an algorithm to synthetically generate reduced-visibility road hump scenarios. We observed that our proposed framework effectively detected the marked and unmarked humps in images in both clear and adverse light environments. This architectural adaptation provides an option for early detection of Indian road humps in reduced visibility conditions, thereby enhancing autonomous driving technology to handle a wider range of real-world scenarios.
Keywords: Indian road hump, reduced visibility condition, low light condition, adverse light condition, marked hump, unmarked hump, YOLOv9
Procedia PDF Downloads 27
5902 Study on the Effect of Bolt Locking Method on the Deformation of Bipolar Plate in PEMFC
Authors: Tao Chen, ShiHua Liu, JiWei Zhang
Abstract:
The assembly of proton exchange membrane fuel cells (PEMFC) has a very important influence on their performance and efficiency. The various components of a PEMFC stack are usually locked and fixed by bolts. Locking the bolts causes deformation of the bipolar plate and the other components, which directly affects the degree of deformation of the integral parts of the PEMFC as well as its performance. This paper focuses on a three-cell PEMFC stack. Finite element simulation is used to investigate the deformation of the bipolar plate caused by the quantity and layout of bolts, the bolt locking pressure, and the bolt locking sequence. Finally, we conclude that the optimal assembly scheme for the fuel cell stack uses a locking pressure of 3.8 MPa imposed on the stack, layout type Ⅱ with four locking bolts, and the longitudinal locking method. The scheme was obtained by comparatively analyzing the overall displacement contour of the PEMFC stack, the absolute displacement curves of the bipolar plate along three given paths in the Z direction, and the polarization curve of the fuel cell. The research results are helpful for fuel cell stack assembly.
Keywords: bipolar plate, deformation, finite element simulation, fuel cell, locking bolt
Procedia PDF Downloads 414
5901 Modelling Conceptual Quantities Using Support Vector Machines
Authors: Ka C. Lam, Oluwafunmibi S. Idowu
Abstract:
Uncertainty in cost is a major factor affecting performance of construction projects. To our knowledge, several conceptual cost models have been developed with varying degrees of accuracy. Incorporating conceptual quantities into conceptual cost models could improve the accuracy of early predesign cost estimates. Hence, the development of quantity models for estimating conceptual quantities of framed reinforced concrete structures using supervised machine learning is the aim of the current research. Using measured quantities of structural elements and design variables such as live loads and soil bearing pressures, response and predictor variables were defined and used for constructing conceptual quantities models. Twenty-four models were developed for comparison using a combination of non-parametric support vector regression, linear regression, and bootstrap resampling techniques. R programming language was used for data analysis and model implementation. Gross soil bearing pressure and gross floor loading were discovered to have a major influence on the quantities of concrete and reinforcement used for foundations. Building footprint and gross floor loading had a similar influence on beams and slabs. Future research could explore the modelling of other conceptual quantities for walls, finishes, and services using machine learning techniques. Estimation of conceptual quantities would assist construction planners in early resource planning and enable detailed performance evaluation of early cost predictions.
Keywords: bootstrapping, conceptual quantities, modelling, reinforced concrete, support vector regression
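The bootstrap resampling mentioned above can be sketched in a few lines; a percentile bootstrap for the mean of a conceptual quantity, where the concrete volumes below are invented for illustration (the study itself used R):

```python
import random
import statistics

def bootstrap_ci(sample, n_boot=2000, alpha=0.05, seed=42):
    """Percentile bootstrap confidence interval for the mean,
    the kind of uncertainty band one can attach to a conceptual
    quantity estimate."""
    rng = random.Random(seed)
    means = []
    for _ in range(n_boot):
        # resample with replacement, same size as the original sample
        resample = [rng.choice(sample) for _ in sample]
        means.append(statistics.mean(resample))
    means.sort()
    lo = means[int((alpha / 2) * n_boot)]
    hi = means[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Hypothetical concrete quantities (m^3) for similar framed buildings
concrete = [310, 290, 355, 320, 298, 340, 315, 305]
low, high = bootstrap_ci(concrete)
```

Reporting such an interval alongside the point estimate is what turns a conceptual quantity model into a usable early-planning tool.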
Procedia PDF Downloads 206
5900 Implementing Two Rotatable Circular Polarized Glass Made Window to Reduce the Amount of Electricity Usage by Air Condition System
Authors: Imtiaz Sarwar
Abstract:
Air conditioning in homes may account for one-third of electricity use during peak periods in summer, when the most energy is required in large cities. It not only consumes electricity but also has a serious impact on the environment, including the greenhouse effect. A circular polarizer filter can be used to selectively absorb or pass clockwise or counter-clockwise circularly polarized light. My research is about putting two circularly polarized glasses parallel to each other and making a circular window with them. When we place the two circularly polarized glasses exactly the same way (0 degrees to each other), nothing will be noticed; the window will work as a regular window through which all light and heat can pass. As we keep rotating one of the circularly polarized glasses, the angle between the glasses will keep increasing and the window will keep blocking more and more light. It will completely block all the light and a portion of the related heat when one of the filters reaches 90 degrees to the other. On the other hand, we can simply open the window when fresh air is necessary. This will reduce the necessity of using air conditioning too much, or the consumer will use an electric fan rather than an air conditioning system. Thus, we can save a significant amount of electricity and go green.
Keywords: circular polarizer, window, air condition, light, energy
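Assuming the filter pair behaves like a pair of polarisers obeying Malus's law, as the rotation behaviour described above suggests, the transmitted fraction of light falls as cos² of the angle between the two filters; a minimal sketch (absorption losses in the glass itself are ignored):

```python
import math

def transmitted_fraction(angle_deg):
    """Malus's-law sketch for the rotating polariser window:
    transmitted intensity scales as cos^2 of the angle between
    the two filters (idealised, lossless filters assumed)."""
    return math.cos(math.radians(angle_deg)) ** 2

open_frac = transmitted_fraction(0)    # filters aligned: fully open
half_frac = transmitted_fraction(45)   # partially dimmed: half the light
shut_frac = transmitted_fraction(90)   # crossed: light blocked
```

The smooth cos² roll-off is what lets the occupant dial the window continuously between fully open and fully dark rather than switching between two states.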
Procedia PDF Downloads 608
5899 Evaluation of the Effect of Milk Recording Intervals on the Accuracy of an Empirical Model Fitted to Dairy Sheep Lactations
Authors: L. Guevara, Glória L. S., Corea E. E, A. Ramírez-Zamora M., Salinas-Martinez J. A., Angeles-Hernandez J. C.
Abstract:
Mathematical models are useful for identifying the characteristics of sheep lactation curves to develop and implement improved strategies. However, the accuracy of these models is influenced by factors such as the recording regime, mainly the intervals between test day records (TDR). The current study aimed to evaluate the effect of different TDR intervals on the goodness of fit of the Wood model (WM) applied to dairy sheep lactations. A total of 4,494 weekly TDRs from 156 lactations of dairy crossbred sheep were analyzed. Three new databases were generated from the original weekly TDR data (7D), comprising intervals of 14(14D), 21(21D), and 28(28D) days. The parameters of WM were estimated using the “minpack.lm” package in the R software. The shape of the lactation curve (typical and atypical) was defined based on the WM parameters. The goodness of fit was evaluated using the mean square of prediction error (MSPE), Root of MSPE (RMSPE), Akaike´s Information Criterion (AIC), Bayesian´s Information Criterion (BIC), and the coefficient of correlation (r) between the actual and estimated total milk yield (TMY). WM showed an adequate estimate of TMY regardless of the TDR interval (P=0.21) and shape of the lactation curve (P=0.42). However, we found higher values of r for typical curves compared to atypical curves (0.9vs.0.74), with the highest values for the 28D interval (r=0.95). In the same way, we observed an overestimated peak yield (0.92vs.6.6 l) and underestimated time of peak yield (21.5vs.1.46) in atypical curves. The best values of RMSPE were observed for the 28D interval in both lactation curve shapes. The significant lowest values of AIC (P=0.001) and BIC (P=0.001) were shown by the 7D interval for typical and atypical curves. These results represent the first approach to define the adequate interval to record the regime of dairy sheep in Latin America and showed a better fitting for the Wood model using a 7D interval. 
However, it is possible to obtain good estimates of TMY using a 28D interval, which reduces the sampling frequency and would save additional costs for dairy sheep producers.
Keywords: incomplete gamma, ewes, shape curves, modeling
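The Wood model referenced above has the standard gamma-type form y(t) = a·t^b·e^(−ct). As a minimal sketch, the fitting step can be reproduced with SciPy instead of R's minpack.lm; the parameter values and synthetic records below are illustrative assumptions, not data from the study:

```python
import numpy as np
from scipy.optimize import curve_fit

def wood(t, a, b, c):
    """Wood lactation curve: y(t) = a * t**b * exp(-c * t)."""
    return a * t**b * np.exp(-c * t)

# Synthetic weekly test-day records (7D regime); true parameters are illustrative.
rng = np.random.default_rng(0)
t = np.arange(1, 21, dtype=float)                 # weeks in milk
y = wood(t, a=1.2, b=0.25, c=0.05) + rng.normal(0, 0.02, t.size)

(a, b, c), _ = curve_fit(wood, t, y, p0=(1.0, 0.2, 0.04))

# Derived quantities commonly reported for lactation curves:
t_peak = b / c                                    # time of peak yield (weeks)
y_peak = wood(np.array([t_peak]), a, b, c)[0]     # peak yield
total = wood(t, a, b, c).sum()                    # estimated total milk yield (TMY)
```

Coarser regimes (14D, 21D, 28D) can be simulated simply by subsampling `t` and `y` before fitting, which is essentially how the study's comparison databases were built.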
Procedia PDF Downloads 78
5898 Development of Trigger Tool to Identify Adverse Drug Events From Warfarin Administered to Patient Admitted in Medical Wards of Chumphae Hospital
Authors: Puntarikorn Rungrattanakasin
Abstract:
Objectives: To develop a trigger tool to warn of the risk of bleeding as an adverse event from warfarin use during admission in the Medical Wards of Chumphae Hospital. Methods: A retrospective study was performed by reviewing the medical records of patients admitted between June 1st, 2020 and May 31st, 2021. ADEs were evaluated by Naranjo's algorithm. The international normalized ratio (INR) and bleeding events during admission were collected. Statistical analyses, including the Chi-square test and a Receiver Operating Characteristic (ROC) curve for the optimal INR threshold, were used for the study. Results: Among the 139 admissions, the INR ranged between 0.86 and 14.91; there was a total of 15 bleeding events, of which 9 were mild and 6 were severe. Bleeding occurred whenever the INR was greater than 2.5 and reached statistical significance (p<0.05), in concordance with the ROC curve, which yielded 100% sensitivity and 60% specificity in the detection of a bleeding event. In this regard, an INR greater than 2.5 was considered the optimal threshold for a prompt alert of bleeding tendency. Conclusions: An INR value greater than 2.5 would be an appropriate trigger tool to warn of the risk of bleeding for patients taking warfarin in Chumphae Hospital.
Keywords: trigger tool, warfarin, risk of bleeding, medical wards
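The threshold-selection step described above amounts to sweeping candidate INR cut-offs and scoring each by sensitivity and specificity. A minimal sketch with NumPy, using made-up illustrative data (not the study's 139 admissions), picking the cut-off that maximizes Youden's J = sensitivity + specificity − 1:

```python
import numpy as np

def sens_spec(inr, bled, threshold):
    """Sensitivity/specificity of the rule 'alert if INR > threshold'."""
    pred = inr > threshold
    tp = np.sum(pred & bled); fn = np.sum(~pred & bled)
    tn = np.sum(~pred & ~bled); fp = np.sum(pred & ~bled)
    return tp / (tp + fn), tn / (tn + fp)

# Illustrative data only -- not the study's records.
inr  = np.array([1.1, 1.8, 2.2, 2.6, 3.0, 3.4, 4.1, 5.2, 2.4, 1.5])
bled = np.array([0,   0,   0,   1,   1,   1,   1,   1,   0,   0], dtype=bool)

# Sweep candidate thresholds, score each by Youden's J, keep the best.
cands = np.unique(inr)
scores = [sum(sens_spec(inr, bled, c)) - 1 for c in cands]
best = cands[int(np.argmax(scores))]
```

In practice one would use a library ROC routine (e.g. scikit-learn's `roc_curve`) on the real admission data; the sketch just makes the sensitivity/specificity trade-off explicit.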
Procedia PDF Downloads 148
5897 Mixtures of Length-Biased Weibull Distributions for Loss Severity Modelling
Authors: Taehan Bae
Abstract:
In this paper, a class of length-biased Weibull mixtures is presented to model loss severity data. The proposed model generalizes the Erlang mixture with a common scale parameter and shares many of its important modelling features, such as the flexibility to fit various distribution shapes and weak denseness in the class of positive continuous distributions. We show that the asymptotic tail estimate of the length-biased Weibull mixture is of Weibull type, which makes the model effective for fitting loss severity data with heavy-tailed observations. A method of statistical estimation is discussed, with applications to real catastrophic loss data sets.
Keywords: Erlang mixture, length-biased distribution, transformed gamma distribution, asymptotic tail estimate, EM algorithm, expectation-maximization algorithm
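For a single component of such a mixture, the length-biased density is the standard size-biasing f_LB(x) = x·f(x)/E[X], with E[X] = λΓ(1+1/k) for a Weibull(k, λ). A sketch of one component (the paper's mixture and EM estimation are not reproduced here; the check that the density integrates to one is purely numerical):

```python
import numpy as np
from math import gamma

def weibull_pdf(x, k, lam):
    """Weibull density with shape k and scale lam."""
    return (k / lam) * (x / lam) ** (k - 1) * np.exp(-(x / lam) ** k)

def length_biased_weibull_pdf(x, k, lam):
    """Length-biased density: f_LB(x) = x * f(x) / E[X],
    where E[X] = lam * Gamma(1 + 1/k)."""
    return x * weibull_pdf(x, k, lam) / (lam * gamma(1 + 1 / k))

# Numerical check that the length-biased density still integrates to ~1.
x = np.linspace(1e-6, 50, 200_000)
area = np.sum(length_biased_weibull_pdf(x, k=1.5, lam=2.0)) * (x[1] - x[0])
```

A mixture model would then take a weighted sum of such components with a common scale parameter, mirroring the Erlang-mixture construction the paper generalizes.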
Procedia PDF Downloads 224
5896 Segmenting 3D Optical Coherence Tomography Images Using a Kalman Filter
Authors: Deniz Guven, Wil Ward, Jinming Duan, Li Bai
Abstract:
Over the past two decades or so, Optical Coherence Tomography (OCT) has been used to diagnose retina and optic nerve diseases. The retinal nerve fibre layer, for example, is a powerful diagnostic marker for detecting and staging glaucoma. With the advances in optical imaging hardware, the adoption of OCT is now commonplace in clinics. More and more OCT images are being generated, and for these images to have clinical applicability, accurate automated segmentation software is needed. OCT image segmentation is still an active research area, as OCT images are inherently noisy due to multiplicative speckle noise. Simple edge detection algorithms are unsuitable for detecting retinal layer boundaries in OCT images. Intensity fluctuations, motion artefacts, and the presence of blood vessels further degrade OCT image quality. In this paper, we introduce a new method for segmenting three-dimensional (3D) OCT images. It involves the use of a Kalman filter, which is commonly used in computer vision for object tracking. The Kalman filter is applied to the 3D OCT image volume to track the retinal layer boundaries through the slices within the volume and thus segment the 3D image. Specifically, after some pre-processing of the OCT images, points on the retinal layer boundaries in the first image are identified, and curve fitting is applied to them so that the layer boundaries can be represented by the coefficients of the curve equations. These coefficients then form the state space for the Kalman filter. The filter then produces an optimal estimate of the current state of the system by updating its previous state with the available measurements in a feedback control loop. The results show that the algorithm can be used to segment the retinal layers in OCT images.
One limitation of the current algorithm is that the curve representation of the retinal layer boundary does not work well when the boundary splits into two, e.g., at the optic nerve. This may be resolved by using a different representation of the boundaries, such as b-splines or level sets. The use of a Kalman filter shows promise for developing accurate and effective 3D OCT segmentation methods.
Keywords: optical coherence tomography, image segmentation, Kalman filter, object tracking
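The tracking idea above — boundary-curve coefficients as the state, one measurement per B-scan slice — can be sketched with a linear Kalman filter. An identity transition models a boundary that changes slowly between slices, as the paper describes; the noise covariances Q and R are illustrative assumptions, not values from the paper:

```python
import numpy as np

def kalman_track(coeff_measurements, q=1e-4, r=1e-2):
    """Track boundary-curve coefficients through B-scan slices with a linear
    Kalman filter (transition F = I, observation H = I)."""
    n = coeff_measurements.shape[1]
    x = coeff_measurements[0].copy()          # initialize from the first slice's fit
    P = np.eye(n)
    Q, R, I = q * np.eye(n), r * np.eye(n), np.eye(n)
    smoothed = [x.copy()]
    for z in coeff_measurements[1:]:
        P = P + Q                             # predict: boundary ~ constant slice-to-slice
        K = P @ np.linalg.inv(P + R)          # Kalman gain
        x = x + K @ (z - x)                   # update with this slice's fitted coefficients
        P = (I - K) @ P
        smoothed.append(x.copy())
    return np.array(smoothed)

# Per-slice measurements would come from fitting a curve to detected boundary
# points in each slice, e.g. np.polyfit(cols, rows, 2), stacked into an array.
```

The filter effectively averages each slice's noisy curve fit against the boundary's track through the volume, which is what makes it robust to speckle in individual slices.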
Procedia PDF Downloads 482
5895 The Effect of TiO₂ Nano-Thin Films on Light Transmission and Self-Cleaning Capabilities of Glass Surface
Authors: Ahmad Alduweesh
Abstract:
Self-cleaning surfaces have become essential in various applications. In photovoltaics, for instance, they provide a cost-effective way to keep solar cells clean. Titanium dioxide (TiO₂) nanoparticle films were fabricated at different thicknesses to study the effect of thickness on the hydrophilicity of TiO₂, eventually leading to customized hydrophilicity levels under natural light. A remarkable increase in surface hydrophilicity was observed after thermal annealing of the as-deposited TiO₂ thin films, with the contact angle dropping from around 85.4° for the as-deposited films to 5.1° for one of the annealed samples. The produced thin films were exposed to the outside environment to observe the effect of dust. Light transmittance measurements by UV-VIS spectroscopy will be conducted on the lowest and highest thicknesses (5-40 nm); this will show whether the titania has enabled more sunlight to penetrate the glass. Surface characterizations, including AFM and contact angle measurements, are included in this study.
Keywords: physical vapor deposition, TiO₂, nano-thin films, hydrophobicity, hydrophilicity, self-cleaning surfaces
Procedia PDF Downloads 115
5894 An Information-Based Approach for Preference Method in Multi-Attribute Decision Making
Authors: Serhat Tuzun, Tufan Demirel
Abstract:
Multi-Criteria Decision Making (MCDM) is the modelling of real-life problems in order to solve them. It is a discipline that aids decision makers faced with conflicting alternatives in making an optimal decision. MCDM problems can be classified into two main categories, based on purpose and data type: Multi-Attribute Decision Making (MADM) and Multi-Objective Decision Making (MODM). Although various MADM techniques have been developed for the problems encountered, their methodology is limited in modelling real life. Moreover, objective results are hard to obtain, and findings are generally derived from subjective data. New and modified techniques present new approaches such as fuzzy logic, but such comprehensive techniques, even though they model real life better, have not found a place in real-world applications because their complex structure makes them hard to apply. These constraints restrict the development of MADM. This study aims to conduct a comprehensive analysis of preference methods in MADM and to propose an information-based approach. For this purpose, a detailed literature review has been conducted, and current approaches with their advantages and disadvantages have been analyzed. The approach is then introduced: performance values of the criteria are calculated in two steps, first by determining the distribution of each attribute and standardizing it, then by calculating the information of each attribute as informational energy.
Keywords: literature review, multi-attribute decision making, operations research, preference method, informational energy
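The abstract does not spell out its exact formulation, but "informational energy" commonly refers to Onicescu's informational energy, E = Σ pᵢ², computed on an attribute's empirical distribution. A hedged sketch of that quantity (the histogram-based standardization is an assumption for illustration; the paper's standardization step may differ):

```python
import numpy as np

def informational_energy(values, bins=10):
    """Onicescu informational energy E = sum(p_i**2) of an attribute's
    empirical distribution. One common formalization; shown for illustration."""
    hist, _ = np.histogram(values, bins=bins)
    p = hist / hist.sum()                 # standardize counts into a distribution
    return float(np.sum(p ** 2))

# A concentrated attribute carries higher informational energy than a spread-out one.
rng = np.random.default_rng(1)
concentrated = rng.normal(0.0, 0.1, 1000)
uniform_like = rng.uniform(-1.0, 1.0, 1000)
e_c = informational_energy(concentrated)
e_u = informational_energy(uniform_like)
```

Informational energy ranges from 1/bins (maximally spread) to 1 (all mass in one bin), so it acts as a concentration score an MADM method can use to weight attributes.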
Procedia PDF Downloads 225
5893 Effect of Infill Density and Pattern on the Compressive Strength of Parts Produced by Polylactic Acid Filament Using Fused Deposition Modelling
Authors: G. K. Awari, Vishwajeet V. Ambade, S. W. Rajurkar
Abstract:
The field of additive manufacturing is growing, and new discoveries are continually being made. 3D printing machines are being developed to accommodate a wider range of printing materials, including plastics, metals (metal AM powders), composites, and filaments. Numerous printing materials are available for industrial additive manufacturing, each with its own characteristics, advantages, and disadvantages. In order to avoid errors in additive manufacturing, key elements such as material type, texture, cost, and printing technique and procedure must be examined, and selecting the best material for a particular job can be complex. Polylactic acid (PLA) is made from sugar cane or cornstarch, both of which are renewable resources. "Black plastic" is another name for it. Because it is safe to use and print, it is frequently used in primary and secondary schools, where FDM printing is also commonly done. PLA is simple to print because of its low warping, and it can even be printed on a cold surface. Compared to ABS, it allows sharper edges and features to be printed, and it comes in a wide range of colours. PLA is the most common material used in fused deposition modelling (FDM) and can be used to print a wide range of components, including medical implants, household items, and mechanical parts. In the current investigation, printed items with varying infill patterns are subjected to compressive tests to examine their behaviour under compressive stresses.
Keywords: fused deposition modelling, polylactic acid, infill density, infill pattern, compressive strength
Procedia PDF Downloads 75
5892 Evaluation of Different Anticoagulant Effects on Flow Properties of Human Blood Using Falling Needle Rheometer
Authors: Hiroki Tsuneda, Takamasa Suzuki, Hideki Yamamoto, Kimito Kawamura, Eiji Tamura, Katharina Wochner, Roberto Plasenzotti
Abstract:
The flow properties of human blood are important factors in the prevention of circulatory conditions such as high blood pressure, diabetes mellitus, and cardiac infarction. However, the measurement of blood flow properties, especially blood viscosity, is not easy because of coagulation or aggregation after a sample is taken from the blood vessel. In the experiment, anticoagulants were added to the human blood to avoid its solidification. The anticoagulant used in a blood test is chosen according to the purpose of the test, since each anticoagulant acts on blood through a different mechanism. As a result, blood properties measured with different anticoagulants are difficult to compare, so it is important to clarify the effect of each anticoagulant on the measured properties. In previous work, a compact falling needle rheometer (FNR) was developed to measure the flow properties of human blood, such as the flow curve and the apparent viscosity. It was found that the FNR system can serve as a rheometer or viscometer under various experimental conditions for both human and other mammalian blood. In this study, measurements of human blood viscosity with different anticoagulants (EDTA and heparin) were carried out using the newly developed FNR system, and the effect of each anticoagulant on blood viscosity was tested. The accuracy of the viscometry was verified using standard calibration liquids (JS-10, JS-20); the observed data agreed with reference data to within about 1.0% at 310 K. The flow curves of six male and female subjects were measured with each anticoagulant. Heparin inhibits the coagulation of human blood by activating antithrombin.
To examine the effect of anticoagulant on human blood viscosity, flow curves were measured at high shear rates (>350 s⁻¹), and the apparent viscosity of each subject was determined with each anticoagulant. The apparent viscosity of human blood with heparin was 2%-9% higher than that with EDTA; however, the difference between the two anticoagulants varied from subject to subject. Further work should consider the effects of other physical properties, such as the cellular and plasma components.
Keywords: falling-needle rheometer, human blood, viscosity, anticoagulant
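In a falling needle viscometer, the needle's terminal velocity balances drag against buoyant weight, so the apparent viscosity follows a simplified working relation η = K·(ρ_needle − ρ_fluid)/U_t, with the instrument constant K obtained by calibration (e.g. against the JS-10/JS-20 standard liquids mentioned above). A sketch under that relation; all numerical values are illustrative assumptions, not data from the study:

```python
def apparent_viscosity(K, rho_needle, rho_fluid, U_t):
    """Apparent viscosity [Pa.s] from falling-needle terminal velocity.
    Simplified working relation: eta = K * (rho_needle - rho_fluid) / U_t."""
    return K * (rho_needle - rho_fluid) / U_t

def calibrate_K(eta_ref, rho_needle, rho_fluid, U_t):
    """Back out the instrument constant from one standard-liquid measurement."""
    return eta_ref * U_t / (rho_needle - rho_fluid)

# Calibrate with a standard liquid of known viscosity (10 mPa.s assumed),
# then measure a blood-like fluid; densities in kg/m^3, velocities in m/s.
K = calibrate_K(eta_ref=0.010, rho_needle=7800.0, rho_fluid=1030.0, U_t=0.020)
eta_blood = apparent_viscosity(K, rho_needle=7800.0, rho_fluid=1055.0, U_t=0.055)
```

Repeating the measurement across shear rates (by varying needle weight or diameter) yields the flow curve from which the high-shear apparent viscosity is read.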
Procedia PDF Downloads 442
5891 Studies on the Emergence Pattern of Cercariae from Fresh Water Snails (Mollusca: Gastropoda)
Authors: V. R. Kakulte, K. N. Gaikwad
Abstract:
The emergence patterns of different types of cercariae from three snail hosts, Melania tuberculata, Lymnea auricularia, and Viviparous bengalensis, have been studied in detail. In the natural-emergence method, the snails (2 to 3 at a time) were kept in separate test tubes, providing a constant source of living cercariae emerging naturally from the snails. Sunlight and artificial light were observed to play an important positive role in stimulating the emergence of cercariae, and the effect of light and darkness on the emission pattern of cercariae has been studied.
Keywords: cercariae, snail host, emergence pattern, gastropoda
Procedia PDF Downloads 317
5890 Synthesis and Thermoluminescence Investigations of Doped LiF Nanophosphor
Authors: Pooja Seth, Shruti Aggarwal
Abstract:
Thermoluminescence dosimetry (TLD) is one of the most effective methods for the assessment of dose during diagnostic radiology and radiotherapy applications. In these applications, monitoring of the absorbed dose is essential to protect the patient from undue exposure and to evaluate the risks that may arise from exposure. LiF-based thermoluminescence (TL) dosimeters are promising materials for the estimation, calibration, and monitoring of dose due to their favourable dosimetric characteristics, such as tissue equivalence, high sensitivity, energy independence, and dose linearity. As the TL efficiency of a phosphor strongly depends on the preparation route, it is interesting to investigate the TL properties of LiF-based phosphors in nanocrystalline form. LiF doped with magnesium (Mg), copper (Cu), sodium (Na), and silicon (Si) in nanocrystalline form was prepared using a chemical co-precipitation method, forming cubical LiF nanostructures. TL dosimetry properties were investigated by exposure to gamma rays. The TL glow curve of the nanocrystalline form consists of a single peak at 419 K, as compared to the multiple peaks observed in the microcrystalline form. A consistent glow curve structure with maximum TL intensity at an annealing temperature of 573 K and a linear dose response from 0.1 to 1000 Gy are observed, which is advantageous for radiotherapy applications. Good reusability, low fading (5% over a month), and a negligible residual signal (0.0019%) are observed. Photoluminescence measurements show a wide emission band at 360-550 nm in undoped LiF, whereas an intense peak at 488 nm appears in the doped LiF nanophosphor. The phosphor also exhibits intense optically stimulated luminescence. The nanocrystalline LiF: Mg, Cu, Na, Si phosphor prepared by the co-precipitation method showed a simple glow curve structure, linear dose response, reproducibility, negligible residual signal, good thermal stability, and low fading.
The LiF: Mg, Cu, Na, Si phosphor in nanocrystalline form has tremendous potential in diagnostic radiology, radiotherapy, and high-energy radiation applications.
Keywords: thermoluminescence, nanophosphor, optically stimulated luminescence, co-precipitation method
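A single TL glow peak like the 419 K peak reported above is commonly described by the first-order Randall-Wilkins model, I(T) = n₀·s·e^(−E/kT)·exp(−(s/β)∫e^(−E/kT′)dT′). A numerical sketch; the trap depth E and frequency factor s below are illustrative assumptions chosen so the simulated peak falls near 419 K, not kinetic parameters from the study:

```python
import numpy as np

K_B = 8.617e-5  # Boltzmann constant, eV/K

def randall_wilkins(T, E, s, beta, n0=1.0):
    """First-order TL glow curve I(T) for trap depth E [eV], frequency
    factor s [1/s], and linear heating rate beta [K/s]."""
    dT = T[1] - T[0]
    boltz = np.exp(-E / (K_B * T))
    integral = np.cumsum(boltz) * dT      # approximates int_T0^T exp(-E/kT') dT'
    return n0 * s * boltz * np.exp(-(s / beta) * integral)

T = np.linspace(300.0, 520.0, 4000)
I = randall_wilkins(T, E=1.1, s=1e12, beta=1.0)   # illustrative parameters
T_peak = T[np.argmax(I)]                          # simulated peak temperature
```

Fitting this shape to a measured glow curve is how trap parameters (E, s) are typically extracted from a single-peak phosphor like the one reported here.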
Procedia PDF Downloads 405
5889 Two-Channels Thermal Energy Storage Tank: Experiments and Short-Cut Modelling
Authors: M. Capocelli, A. Caputo, M. De Falco, D. Mazzei, V. Piemonte
Abstract:
This paper presents the experimental results and the related modelling of a thermal energy storage (TES) facility, conceived and realized by ENEA, which achieves the thermocline with an innovative geometry. First, the thermal energy exchange model of an equivalent shell-and-tube heat exchanger is described and tested to reproduce the performance of the spiral exchanger installed in the TES. Through regression of the experimental data, a first-order thermocline model was also validated to provide an analytical function of the thermocline, useful for performance evaluation, for comparison with other systems, and for implementation in simulations of integrated systems (e.g., power plants). The experimental data obtained from the plant start-up and the short-cut modelling of the system can be useful for process analysis, for the scale-up of the thermal storage system, and for investigating the feasibility of its implementation in actual case studies.
Keywords: CSP plants, thermal energy storage, thermocline, mathematical modelling, experimental data
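The abstract does not give the analytical thermocline function it validates; one common choice in the thermocline-storage literature is a sigmoid temperature profile along the tank height, regressed against thermocouple readings. A sketch under that assumption (function form and all numerical values are illustrative, not the ENEA facility's data):

```python
import numpy as np
from scipy.optimize import curve_fit

def thermocline(z, T_cold, T_hot, z0, w):
    """Sigmoid thermocline profile: temperature [K] vs. normalized tank
    height z, centered at z0 with characteristic thickness w."""
    return T_cold + (T_hot - T_cold) / (1.0 + np.exp(-(z - z0) / w))

# Synthetic 'measured' profile standing in for thermocouple readings.
rng = np.random.default_rng(0)
z = np.linspace(0.0, 1.0, 40)
T_meas = thermocline(z, 290.0, 550.0, 0.5, 0.05) + rng.normal(0.0, 1.0, z.size)

popt, _ = curve_fit(thermocline, z, T_meas, p0=(300.0, 500.0, 0.5, 0.1))
T_cold, T_hot, z0, w = popt
```

The fitted center z0 and thickness w give a compact description of thermocline position and degradation over charge/discharge cycles, which is what makes an analytical profile convenient for plant-level simulations.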
Procedia PDF Downloads 329
5888 Deep Foundations: Analysis of the Lateral Response of Closed Ended Steel Tubular Piles Embedded in Sandy Soil Using P-Y Curves
Authors: Ameer A. Jebur, William Atherton, Rafid M. Alkhaddar, Edward Loffill
Abstract:
Understanding the behaviour of piles under independent lateral loads and precisely predicting the capacity of piles subjected to different lateral loads are vital topics in foundation design and analysis. The laterally loaded behaviour of deep foundations embedded in cohesive and non-cohesive soils is commonly analysed with the Winkler model (beam on elastic foundation), in which the interaction between the embedded pile and the contacting soil is simulated by nonlinear p-y curves. The presence of many approaches to interpreting soil-pile interaction has resulted in numerous outputs and indicates that no general approach has yet been adopted. The current study presents the results of numerical modelling of the behaviour of steel tubular piles of 25.4 mm outside diameter with various embedment depth-to-diameter ratios (L/d), embedded in a sand-filled calibration chamber of known relative density. The study revealed that the shear strength parameters of the sand specimens and the L/d ratios are the most significant factors influencing the response and capacity of the pile, taking into consideration the complex interaction between pile and soil. Good agreement was achieved when comparing this modelling approach with experimental physical modelling carried out by another researcher.
Keywords: deep foundations, slenderness ratio, soil-pile interaction, Winkler model (beam on elastic foundation), non-cohesive soil
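One widely used empirical p-y formulation for sand, the API hyperbolic-tangent form p = A·p_u·tanh(k·z·y/(A·p_u)), gives the flavour of the nonlinear springs in the Winkler model described above. A sketch of a single spring at one depth; the parameter values are illustrative assumptions, not the study's calibrated ones:

```python
import numpy as np

def py_sand(y, p_u, k, z, D, static=True):
    """API-style p-y curve for sand: p = A * p_u * tanh(k*z*y / (A*p_u)).
    y: lateral deflection [m]; p_u: ultimate resistance [N/m]; k: initial
    modulus of subgrade reaction [N/m^3]; z: depth [m]; D: pile diameter [m]."""
    A = max(0.9, 3.0 - 0.8 * z / D) if static else 0.9
    return A * p_u * np.tanh(k * z * y / (A * p_u))

# One illustrative spring: a 25.4 mm pile at 0.5 m depth in medium-dense sand.
y = np.linspace(0.0, 0.02, 100)                # deflections up to 20 mm
p = py_sand(y, p_u=50e3, k=20e6, z=0.5, D=0.0254)
```

The curve's initial slope is k·z and it saturates at A·p_u, so a beam-on-Winkler solver simply evaluates one such spring per node along the embedded length.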
Procedia PDF Downloads 299