Search results for: curve approximation
279 The Effect of Excel on Undergraduate Students’ Understanding of Statistics and the Normal Distribution
Authors: Masomeh Jamshid Nejad
Abstract:
Nowadays, statistical literacy is no longer merely a desirable skill but an essential one, with broad applications across diverse fields, especially in operational decision areas such as business management, finance, and economics. As such, learning and a deep understanding of statistical concepts are essential in the context of business studies. One of the crucial topics in statistical theory and its application is the normal distribution, often called a bell-shaped curve. To interpret data and conduct hypothesis tests, comprehending the properties of the normal distribution (the mean and standard deviation) is essential for business students. This requires undergraduate students in the fields of economics and business management to visualize and work with data following a normal distribution. Since technology is interconnected with education these days, it is important to teach statistics topics to undergraduate students in the context of Python, RStudio, and Microsoft Excel. This research endeavours to shed light on the effect of Excel-based instruction on learners' knowledge of statistics, specifically the central concept of the normal distribution. As such, two groups of undergraduate students (from the Business Management program) were compared in this research study. One group underwent Excel-based instruction and the other group relied only on traditional teaching methods. We analyzed experimental data and BBA participants' responses to statistics-related questions focusing on the normal distribution, including its key attributes, such as the mean and standard deviation. The results of our study indicate that exposing students to Excel-based learning supports learners in comprehending statistical concepts more effectively compared with the other group of learners (taught with the traditional method). In addition, students in the context of Excel-based instruction demonstrated the ability to visualize and interpret data following a normal distribution.
Keywords: statistics, excel-based instruction, data visualization, pedagogy
Procedia PDF Downloads 53
278 By Removing High-Performance Aerobic Scope Phenotypes, Capture Fisheries May Reduce the Resilience of Fished Populations to Thermal Variability and Compromise Their Persistence into the Anthropocene
Authors: Lauren A. Bailey, Amber R. Childs, Nicola C. James, Murray I. Duncan, Alexander Winkler, Warren M. Potts
Abstract:
For the persistence of fished populations in the Anthropocene, it is critical to predict how they will respond to the coupled threats of exploitation and climate change for adaptive management. The resilience of fished populations will depend on their capacity for physiological plasticity and acclimatization in response to environmental shifts. However, there is evidence for the selection of physiological traits by capture fisheries. Hence, fish populations may have a limited scope for the rapid expansion of their tolerance ranges or physiological adaptation under fishing pressures. To determine the physiological vulnerability of fished populations in the Anthropocene, metabolic performance was compared between a fished and a spatially protected Chrysoblephus laticeps population in response to thermal variability. Individual aerobic scope phenotypes were quantified using intermittent flow respirometry by comparing changes in the energy expenditure of each individual at ecologically relevant temperatures, mimicking the variability experienced as a result of upwelling and downwelling events. The proportions of high- and low-performance individuals were compared between the fished and spatially protected populations. The fished population had limited aerobic scope phenotype diversity and fewer high-performance phenotypes, resulting in a significantly lower aerobic scope curve across low (10 °C) and high (24 °C) thermal treatments. The performance of fished populations may be compromised with predicted future increases in cold upwelling events. This calls for the conservation of the physiologically fittest individuals in spatially protected areas, from which they can recruit into nearby fished areas, as a climate resilience tool.
Keywords: climate change, fish physiology, metabolic shifts, over-fishing, respirometry
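As a rough illustration of the comparison described in this abstract, the sketch below contrasts aerobic scope phenotypes (AS = MMR - SMR) between a fished and a protected population in Python; the values, the pooled-median threshold for "high performance", and the use of a Mann-Whitney test are assumptions made for demonstration, not details taken from the study.

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Illustrative aerobic scope values (AS = MMR - SMR, mgO2/kg/h), one per fish
fished    = np.array([210.0, 195.0, 188.0, 220.0, 205.0, 190.0])
protected = np.array([205.0, 260.0, 320.0, 230.0, 355.0, 280.0])

# "High performance" defined here, arbitrarily, as AS above the pooled median
threshold = np.median(np.concatenate([fished, protected]))
print("high-performance share, fished:   ", np.mean(fished > threshold))
print("high-performance share, protected:", np.mean(protected > threshold))
print("Mann-Whitney p =", mannwhitneyu(fished, protected).pvalue)
```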
Procedia PDF Downloads 128
277 Comparison of Finite Difference Schemes for Numerical Study of Ripa Model
Authors: Sidrah Ahmed
Abstract:
River and lake flows are modeled mathematically by the shallow water equations, which are depth-averaged Reynolds-averaged Navier-Stokes equations under the Boussinesq approximation. Temperature stratification dynamics influence the water quality and mixing characteristics, mainly due to atmospheric conditions including air temperature, wind velocity, and radiative forcing. Experimental observations are commonly taken along vertical profiles and are not sufficient to estimate the small-scale turbulence effects that temperature variations induce in shallow flows. Wind shear stress over the water surface influences flow patterns, heat fluxes and the thermodynamics of water bodies as well. Hence it is crucial to couple temperature gradients with the shallow water model to estimate the atmospheric effects on flow patterns. The Ripa system has been introduced to study ocean currents as a variant of the shallow water equations with the addition of temperature variations within the flow. The Ripa model is a hyperbolic system of partial differential equations because all the eigenvalues of the system's Jacobian matrix are real and distinct. The time steps of a numerical scheme are estimated with the eigenvalues of the system. The solution to the Riemann problem of the Ripa model is composed of shocks, contact and rarefaction waves. Solving the Ripa model with Riemann initial data using central schemes is difficult due to the eigenstructure of the system. This work presents the comparison of four different finite difference schemes for the numerical solution of the Riemann problem for the Ripa model. These schemes include the Lax-Friedrichs, Lax-Wendroff, and MacCormack schemes and a higher-order finite difference scheme with the WENO method. The numerical flux functions in both dimensions are approximated according to these methods. The temporal accuracy is achieved by employing a TVD Runge-Kutta method. Numerical tests are presented to examine the accuracy and robustness of the applied methods. It is revealed that the Lax-Friedrichs scheme produces results with oscillations, while the Lax-Wendroff and higher-order difference schemes produce considerably better results.
Keywords: finite difference schemes, Riemann problem, shallow water equations, temperature gradients
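For concreteness, the following is a minimal Python sketch of one Lax-Friedrichs step for the one-dimensional Ripa system with a flat bottom; the periodic grid, dam-break initial data, and simple CFL estimate are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

g = 9.81  # gravitational acceleration

def flux(U):
    # 1D Ripa model, flat bottom: U stacks the rows (h, hu, h*theta)
    h, hu, hth = U
    u, th = hu / h, hth / h
    return np.array([hu, hu * u + 0.5 * g * th * h**2, hu * th])

def lax_friedrichs_step(U, dx, dt):
    # U[:, i] holds the conserved variables in cell i; periodic boundaries
    Up = np.roll(U, -1, axis=1)   # U_{i+1}
    Um = np.roll(U, +1, axis=1)   # U_{i-1}
    return 0.5 * (Up + Um) - dt / (2.0 * dx) * (flux(Up) - flux(Um))

# Riemann (dam-break type) initial data on a periodic grid
n = 200
dx = 1.0 / n
x = (np.arange(n) + 0.5) * dx
U = np.where(x < 0.5, [[2.0], [0.0], [2.0]], [[1.0], [0.0], [1.0]])
c = np.sqrt(g * U[2])            # ~sqrt(g*h*theta), a rough wave-speed estimate
dt = 0.4 * dx / c.max()          # simple CFL condition
U = lax_friedrichs_step(U, dx, dt)
```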
Procedia PDF Downloads 203
276 Development of a Robust Protein Classifier to Predict EMT Status of Cervical Squamous Cell Carcinoma and Endocervical Adenocarcinoma (CESC) Tumors
Authors: Zhenlin Ju, Christopher P. Vellano, Rehan Akbani, Yiling Lu, Gordon B. Mills
Abstract:
The epithelial–mesenchymal transition (EMT) is a process by which epithelial cells acquire mesenchymal characteristics, such as profound disruption of cell-cell junctions, loss of apical-basolateral polarity, and extensive reorganization of the actin cytoskeleton to induce cell motility and invasion. A hallmark of EMT is its capacity to promote metastasis, which is due in part to activation of several transcription factors and subsequent downregulation of E-cadherin. Unfortunately, current approaches have yet to uncover robust protein marker sets that can classify tumors as possessing strong EMT signatures. In this study, we utilize reverse phase protein array (RPPA) data and consensus clustering methods to successfully classify a subset of cervical squamous cell carcinoma and endocervical adenocarcinoma (CESC) tumors into an EMT protein signaling group (EMT group). The overall survival (OS) of patients in the EMT group is significantly worse than that of patients in the other Hormone and PI3K/AKT signaling groups. In addition to a shrinkage and selection method for linear regression (LASSO), we applied training/test set and Monte Carlo resampling approaches to identify a set of protein markers that predicts the EMT status of CESC tumors. We fit a logistic model to these protein markers and developed a classifier, which was fixed in the training set and validated in the testing set. The classifier robustly predicted the EMT status of the testing set with an area under the curve (AUC) of 0.975 by Receiver Operating Characteristic (ROC) analysis. This method not only identifies a core set of proteins underlying an EMT signature in cervical cancer patients, but also provides a tool to examine protein predictors that drive molecular subtypes in other diseases.
Keywords: consensus clustering, TCGA CESC, Silhouette, Monte Carlo LASSO
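A minimal sketch of the LASSO-plus-logistic workflow the abstract describes, using scikit-learn with placeholder data standing in for the RPPA matrix (an L1-penalized logistic model performs the shrinkage/selection and the prediction in one step; the train/test split and AUC evaluation mirror the reported validation, but all numbers here are synthetic):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Placeholder stand-in for RPPA protein levels (samples x proteins) and EMT labels
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 150))
y = (X[:, 0] - X[:, 1] + rng.normal(size=200) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)

# The L1 penalty zeroes out uninformative proteins (LASSO-style selection)
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X_tr, y_tr)
markers = np.flatnonzero(clf.coef_[0])          # proteins kept by the model
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"{markers.size} selected proteins, test AUC = {auc:.3f}")
```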
Procedia PDF Downloads 468
275 Statistical Modeling and Modeling by Artificial Neural Networks of Suspended Sediment in the Mina River Watershed at Wadi El-Abtal Gauging Station (Northern Algeria)
Authors: Redhouane Ghernaout, Amira Fredj, Boualem Remini
Abstract:
Suspended sediment transport is a serious problem worldwide, but it is much more worrying in certain regions of the world, as is the case in the Maghreb and more particularly in Algeria. It continues to reach disturbing proportions in Northern Algeria due to the variability of rainfall in time and space and the constant deterioration of vegetation. Its prediction is essential in order to identify its intensity and define the necessary actions for its reduction. The purpose of this study is to analyze the concentration data of suspended sediment measured at Wadi El-Abtal Hydrometric Station. It also aims to find and highlight regressive power-law relationships that can explain the suspended solid flow in terms of the measured liquid flow. The study strives to find artificial neural network models linking the flow, month and precipitation parameters with solid flow. The results obtained show that the power function of the solid transport rating curve and the artificial neural network models are appropriate methods for analysing and estimating suspended sediment transport in Wadi Mina at Wadi El-Abtal Hydrometric Station. They made it possible to identify in a fairly conclusive manner the neural network model with four input parameters: the liquid flow Q, the month, and the daily precipitation measured at the representative stations (Frenda 013002 and Ain El-Hadid 013004) of the watershed. The model thus obtained makes it possible to estimate the daily solid flows (interpolate and extrapolate) even beyond the period of observation of solid flows (1985/86 to 1999/00), given the availability of the average daily liquid flows and daily precipitation since 1953/1954.
Keywords: suspended sediment, concentration, regression, liquid flow, solid flow, artificial neural network, modeling, mina, algeria
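The power-law rating curve mentioned in this abstract is typically fitted in log space; here is a minimal Python sketch with illustrative numbers (the real coefficients come from the Wadi El-Abtal records, and the neural network variant would add the month and station precipitation as extra inputs):

```python
import numpy as np

# Illustrative data only: liquid discharge Q (m^3/s) vs. solid discharge Qs (kg/s)
Q  = np.array([0.5, 1.2, 3.4, 8.0, 15.0, 40.0])
Qs = np.array([0.9, 3.1, 14.0, 55.0, 150.0, 700.0])

# Rating curve Qs = a * Q^b is linear in log space: log Qs = log a + b log Q
b, log_a = np.polyfit(np.log(Q), np.log(Qs), 1)
a = np.exp(log_a)
print(f"Qs ≈ {a:.2f} * Q^{b:.2f}")
```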
Procedia PDF Downloads 102
274 Specific Language Impairment: Assessing Bilingual Children for Identifying Children with Specific Language Impairment (SLI)
Authors: Manish Madappa, Madhavi Gayathri Raman
Abstract:
The primary vehicle of human communication is language. A breakdown occurring in any aspect of communication may lead to frustration and isolation among learners and teachers. Over seven percent of the world's population currently experience language limitations, and children who exhibit a deviant or deficient language acquisition curve, even when in a language-rich environment like their peers, may be at risk of having a language disorder or language impairment. The difficulty may be at the word level [vocabulary/word knowledge] and/or the sentence level [syntax/morphology]. Children with SLI appear to be developing normally in all aspects except for their receptive and/or expressive language skills. Thus, it is of utmost importance to identify children with or at risk of SLI so that early intervention can foster language and social growth, provide the best possible learning environment with special support for language to be explicitly taught, and serve as a step toward continuous, ongoing support. The present study looks at Kannada-English bilingual children and works towards identifying children at risk of specific language impairment. It was conducted as an exploratory study that systematically enquired into the narratives of young Kannada-English bilinguals and investigated the data for story structure in their narrative formulations. Oral narrative offers a rich source of data about a child's language use in a relatively natural context. The fundamental objective is to ensure comparability and universality, thus allowing for the evaluation of narrative text competence. The data were collected from 10 class three students at a primary school in Mysore, Karnataka, and analyzed for the macrostructure component, reflecting the goal-directed behavior of a protagonist who is motivated to carry out some kind of action with the intention of attaining a goal. The results show that children exhibiting a deviation of -1.25 SD are at risk of SLI. Two learners were identified to be at risk of specific language impairment, with scores more than 1.25 SD below the mean.
Keywords: bilingual, oral narratives, SLI, macrostructure
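The -1.25 SD criterion reported above amounts to a z-score cut-off on the macrostructure scores; a small sketch with made-up scores for ten children:

```python
import numpy as np

# Illustrative macrostructure scores for ten children (not the study's data)
scores = np.array([14, 16, 15, 13, 17, 12, 15, 16, 7, 8], dtype=float)

z = (scores - scores.mean()) / scores.std(ddof=1)
at_risk = np.flatnonzero(z < -1.25)   # more than 1.25 SD below the mean
print("children flagged as at risk of SLI:", at_risk)
```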
Procedia PDF Downloads 288
273 Role of P53, Ki67 and Cyclin A Immunohistochemical Assay in Predicting Wilms' Tumor Mortality
Authors: Ahmed Atwa, Ashraf Hafez, Mohamed Abdelhameed, Adel Nabeeh, Mohamed Dawaba, Tamer Helmy
Abstract:
Introduction and Objective: Tumor staging and grading do not usually reflect the future behavior of Wilms' tumor (WT) regarding mortality. Therefore, in this study, P53, Ki67 and cyclin A immunohistochemistry were used in a trial to predict WT cancer-specific survival (CSS). Methods: In this nonconcurrent cohort study, patients' archived data, including age at presentation, gender, history, clinical examination and radiological investigations, were retrieved, and the patients were then reviewed at the outpatient clinic of a tertiary care center by history-taking, clinical examination and radiological investigations to detect the oncological outcome. Cases that received preoperative chemotherapy or died of causes other than WT were excluded. Formalin-fixed, paraffin-embedded specimens obtained from the previously preserved blocks at the pathology laboratory were taken on positively charged slides for IHC with p53, Ki67 and cyclin A. All specimens were examined by an experienced histopathologist devoted to urological practice and blinded to the patients' clinical findings. P53 and cyclin A staining were scored as 0 (no nuclear staining), 1 (<10% nuclear staining), 2 (10-50% nuclear staining) and 3 (>50% nuclear staining). The Ki67 proliferation index (PI) was graded as low, borderline and high. Results: Of the 75 cases, 40 (53.3%) were males and 35 (46.7%) were females, and the median age was 36 months (2-216). With a mean follow-up of 78.6±31 months, cancer-specific mortality (CSM) occurred in 15 (20%) and 11 (14.7%) patients, respectively. The Kaplan-Meier curve was used for survival analysis, and groups were compared using the log-rank test. Multivariate logistic regression and Cox regression were not used because only one variable (cyclin A) had shown statistical significance (P=.02), whereas the other significant factor (residual tumor) had few cases. Conclusions: Cyclin A IHC should be considered as a marker for the prediction of WT CSS. Prospective studies with a larger sample size are needed.
Keywords: wilms' tumour, nephroblastoma, urology, survival
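A minimal sketch of the survival analysis named in the abstract (Kaplan-Meier curves compared with a log-rank test), using the lifelines package and invented follow-up data rather than the study's cohort:

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Illustrative follow-up times (months) and death indicators for two marker groups
t_hi = np.array([12, 30, 45, 60, 80, 95]);    d_hi = np.array([1, 1, 1, 0, 1, 0])
t_lo = np.array([40, 70, 85, 100, 110, 120]); d_lo = np.array([0, 0, 1, 0, 0, 0])

kmf = KaplanMeierFitter()
kmf.fit(t_hi, event_observed=d_hi, label="high cyclin A")
print(kmf.survival_function_.tail())

# The log-rank test compares the two survival curves
res = logrank_test(t_hi, t_lo, event_observed_A=d_hi, event_observed_B=d_lo)
print(f"log-rank p = {res.p_value:.3f}")
```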
Procedia PDF Downloads 67
272 Quantum Chemical Investigation of Hydrogen Isotopes Adsorption on Metal Ion Functionalized Linde Type A and Faujasite Type Zeolites
Authors: Gayathri Devi V, Aravamudan Kannan, Amit Sircar
Abstract:
In the inner fuel cycle system of a nuclear fusion reactor, the Hydrogen Isotopes Removal System (HIRS) plays a pivotal role. It enables the effective extraction of the hydrogen isotopes from the breeder purge gas, which helps to maintain the tritium breeding ratio and sustain the fusion reaction. Cryogenic Molecular Sieve Bed (CMSB) columns with zeolite adsorbents, one of the components of the HIRS, are considered for the physisorption of hydrogen isotopes at 1 bar and 77 K. Even though zeolites have good thermal stability and reduced activation properties, making them ideal for use in nuclear reactor applications, their modest capacity for hydrogen isotope adsorption is a cause of concern. In order to enhance the adsorbent capacity in an informed manner, it is helpful to understand the adsorption phenomena at the quantum electronic structure level. Physicochemical modifications of the adsorbent material enhance the adsorption capacity through the incorporation of active sites. This may be accomplished through the incorporation of suitable metal ions in the zeolite framework. In this work, molecular hydrogen isotope adsorption on the active sites of functionalized zeolites is investigated in detail using a Density Functional Theory (DFT) study. This involves the utilization of the hybrid Generalized Gradient Approximation (GGA) with dispersion correction to account for the exchange and correlation functional of DFT. The electronic energies, adsorption enthalpy, adsorption free energy, and Highest Occupied Molecular Orbital (HOMO) and Lowest Unoccupied Molecular Orbital (LUMO) energies are computed on the stable 8T zeolite clusters as well as on the periodic structure functionalized with different active sites. The characteristics of the dihydrogen bond with the active metal sites and the isotopic effects are also studied in detail. Validation studies with DFT will also be presented for the adsorption of hydrogen on metal ion functionalized zeolites. The ab-initio screening analysis gave insights regarding the mechanism of hydrogen interaction with the zeolites under study and also the effect of the metal ion on adsorption. This detailed study provides guidelines for the selection of appropriate metal ions that may be incorporated in the zeolite framework for effective adsorption of hydrogen isotopes in the HIRS.
Keywords: adsorption enthalpy, functionalized zeolites, hydrogen isotopes, nuclear fusion, physisorption
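The adsorption energy the abstract refers to is the difference between DFT total energies of the bound complex and its separated parts; a tiny helper makes the bookkeeping explicit (the numbers below are hypothetical placeholders, not computed values from the study):

```python
# E_complex: zeolite cluster + H2, E_cluster: bare cluster, E_h2: isolated H2 (eV)
def adsorption_energy(e_complex: float, e_cluster: float, e_h2: float) -> float:
    """E_ads = E(cluster+H2) - E(cluster) - E(H2); negative means favorable binding."""
    return e_complex - (e_cluster + e_h2)

# Hypothetical DFT total energies for demonstration only
print(adsorption_energy(-1234.62, -1233.95, -0.55))  # -> -0.12 eV
```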
Procedia PDF Downloads 179
271 Urea and Starch Detection on a Paper-Based Microfluidic Device Enabled on a Smartphone
Authors: Shashank Kumar, Mansi Chandra, Ujjawal Singh, Parth Gupta, Rishi Ram, Arnab Sarkar
Abstract:
Milk is one of the primary sources of food and energy, consumed from birth onward. Hence, checking milk quality and purity and the concentration of its constituents becomes a necessary step. Considering the importance of the purity of milk for human health, the following study has been carried out to simultaneously detect and quantify different adulterants, such as urea and starch, in milk with the help of a paper-based microfluidic device integrated with a smartphone. The detection of the concentrations of urea and starch is based on the principle of colorimetry, while the fluid flow in the device is based on the capillary action of porous media. The microfluidic channel proposed in the study is equipped with a specialized detection zone, and it employs a colorimetric indicator undergoing a visible color change when the milk contacts or reacts with a set of reagents, which confirms the presence of different adulterants in the milk. In our proposed work, we have used iodine to detect the percentage of starch in the milk, whereas, in the case of urea, we have used p-DMAB. A direct correlation has been found between the color change intensity and the concentration of adulterants. A calibration curve was constructed to relate color intensity to the corresponding starch and urea concentrations. The device's low-cost production and easy disposability make it highly suitable for widespread adoption, especially in resource-constrained settings. Moreover, a smartphone application has been developed to detect, capture, and analyze the change in color intensity due to the presence of adulterants in the milk. The low-cost nature of the smartphone-integrated paper-based sensor makes it an attractive solution for widespread use. Such sensors are affordable, simple to use, and do not require specialized training, making them ideal tools for regulatory bodies and concerned consumers.
Keywords: paper based microfluidic device, milk adulteration, urea detection, starch detection, smartphone application
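A colorimetric calibration curve of the kind described reduces to a linear fit of intensity against known concentrations, inverted to quantify an unknown sample; a minimal sketch with invented calibration points (units and values are assumptions):

```python
import numpy as np

# Illustrative calibration data: known urea concentrations vs. mean color
# intensity extracted from smartphone images of the detection zone
conc      = np.array([0.0, 0.5, 1.0, 1.5, 2.0])       # % (w/v), assumed units
intensity = np.array([12.0, 58.0, 101.0, 149.0, 196.0])

slope, intercept = np.polyfit(conc, intensity, 1)

def estimate_concentration(i_sample: float) -> float:
    # Invert the linear calibration to quantify an unknown sample
    return (i_sample - intercept) / slope

print(f"estimated urea: {estimate_concentration(120.0):.2f} %")
```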
Procedia PDF Downloads 65
270 Shear Strength Parameters of an Unsaturated Lateritic Soil
Authors: Jeferson Brito Fernades, Breno Padovezi Rocha, Roger Augusto Rodrigues, Heraldo Luiz Giacheti
Abstract:
Geotechnical projects demand appropriate knowledge of soil characteristics and parameters. The determination of geotechnical soil parameters can be done by means of laboratory or in situ tests. In countries with tropical weather, like Brazil, unsaturated soils are very common. In these soils, suction has been recognized as an important stress state variable, which governs the geomechanical behavior. Triaxial and direct shear tests on saturated soil samples determine only the minimum soil shear strength, in other words, with no suction contribution. This paper briefly describes the triaxial test with controlled suction and discusses the influence of suction on the shear strength parameters of a lateritic tropical sandy soil from a Brazilian research site. At this site, a sample pit was excavated to retrieve disturbed and undisturbed soil blocks. The samples extracted from these blocks were tested in the laboratory to represent the soil from 1.5, 3.0 and 5.0 m depth. The stress curves and shear strength envelopes determined by triaxial tests varying suction and confining pressure are presented and discussed. The water retention characteristics of this soil complement the analysis. In situ CPT tests were also carried out at this site in different seasons of the year. In this case, the soil suction profile was determined by means of the soil water retention curve. This extra information allowed assessing how soil suction also affected the CPT data and the shear strength parameters estimated via correlation. The major conclusions of this paper are: the undisturbed soil samples contracted before shearing, and the soil shear strength increased hyperbolically with suction; and it was possible to assess how soil suction influenced CPT test data based on the water content profile of the soil as well as the water retention curve. This study contributes to a better understanding of the shear strength parameters and the soil variability of a typical unsaturated tropical soil.
Keywords: site characterization, triaxial test, CPT, suction, variability
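The reported hyperbolic increase of shear strength with suction can be captured by a simple saturating fit; a sketch with invented data points, where the hyperbolic form and the initial guesses are assumptions for illustration:

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative data: matric suction (kPa) vs. shear strength (kPa)
suction  = np.array([0.0, 25.0, 50.0, 100.0, 200.0, 400.0])
strength = np.array([30.0, 45.0, 55.0, 66.0, 74.0, 79.0])

def hyperbolic(s, tau0, a, b):
    # Strength rises hyperbolically with suction toward the asymptote tau0 + 1/b
    return tau0 + s / (a + b * s)

popt, _ = curve_fit(hyperbolic, suction, strength, p0=(30.0, 1.0, 0.01))
print("tau0, a, b =", np.round(popt, 3))
```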
Procedia PDF Downloads 416
269 Mathematical Modelling of Drying Kinetics of Cantaloupe in a Solar Assisted Dryer
Authors: Melike Sultan Karasu Asnaz, Ayse Ozdogan Dolcek
Abstract:
Crop drying, which aims to reduce the moisture content to a certain level, is a method used to extend shelf life and prevent spoiling. One of the oldest food preservation techniques is open sun or shade drying. Even though this technique is the most affordable of all drying methods, it has drawbacks such as contamination by insects, environmental pollution, windborne dust, and direct exposure to weather conditions such as wind, rain, and hail. However, solar dryers that provide a hygienic and controllable environment to preserve food and extend its shelf life have been developed and used to dry agricultural products. Thus, foods can be dried quickly without being affected by weather variables, and quality products can be obtained. This research is mainly devoted to investigating the modelling of the drying kinetics of cantaloupe in a forced convection solar dryer. Mathematical models for the drying process should be defined to simulate the drying behavior of the foodstuff, which will greatly contribute to the development of solar dryer designs. Thus, drying experiments were conducted and replicated five times, and various data such as temperature, relative humidity, solar irradiation, drying air speed, and weight were instantly monitored and recorded. The moisture content of sliced and pretreated cantaloupe was converted into moisture ratio and then fitted against drying time to construct drying curves. Then, 10 quasi-theoretical and empirical drying models were applied to find the best drying curve equation according to the Levenberg-Marquardt nonlinear optimization method. The best-fitted mathematical drying model was selected according to the highest coefficient of determination (R²) and the lowest mean square of the deviations (χ²) and root mean square error (RMSE) criteria. The best-fitted model was utilized to simulate thin-layer solar drying of cantaloupe, and the simulation results were compared with the experimental data for validation purposes.
Keywords: solar dryer, mathematical modelling, drying kinetics, cantaloupe drying
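The model-fitting step described above can be reproduced with SciPy, whose curve_fit uses Levenberg-Marquardt by default for unbounded problems; the sketch fits the Page model, one common thin-layer drying equation, to invented moisture-ratio data and reports the selection criteria named in the abstract:

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative thin-layer drying data: time (h) vs. moisture ratio MR
t  = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
mr = np.array([1.00, 0.74, 0.52, 0.36, 0.24, 0.16, 0.10])

def page(t, k, n):
    # Page model: MR = exp(-k * t^n)
    return np.exp(-k * t**n)

popt, _ = curve_fit(page, t, mr, p0=(0.3, 1.0))   # Levenberg-Marquardt
pred = page(t, *popt)

rmse = np.sqrt(np.mean((mr - pred) ** 2))
r2 = 1 - np.sum((mr - pred) ** 2) / np.sum((mr - mr.mean()) ** 2)
print(f"k={popt[0]:.3f}, n={popt[1]:.3f}, RMSE={rmse:.4f}, R²={r2:.4f}")
```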
Procedia PDF Downloads 126
268 Advancements in Predicting Diabetes Biomarkers: A Machine Learning Epigenetic Approach
Authors: James Ladzekpo
Abstract:
Background: The urgent need to identify new pharmacological targets for diabetes treatment and prevention has been amplified by the disease's extensive impact on individuals and healthcare systems. A deeper insight into the biological underpinnings of diabetes is crucial for the creation of therapeutic strategies aimed at these biological processes. Current predictive models based on genetic variations fall short of accurately forecasting diabetes. Objectives: Our study aims to pinpoint key epigenetic factors that predispose individuals to diabetes. These factors will inform the development of an advanced predictive model that estimates diabetes risk from genetic profiles, utilizing state-of-the-art statistical and data mining methods. Methodology: We have implemented a recursive feature elimination with cross-validation using the support vector machine (SVM) approach for refined feature selection. Building on this, we developed six machine learning models, including logistic regression, k-Nearest Neighbors (k-NN), Naive Bayes, Random Forest, Gradient Boosting, and Multilayer Perceptron Neural Network, to evaluate their performance. Findings: The Gradient Boosting Classifier excelled, achieving a median recall of 92.17% and outstanding metrics such as area under the receiver operating characteristics curve (AUC) with a median of 68%, alongside median accuracy and precision scores of 76%. Through our machine learning analysis, we identified 31 genes significantly associated with diabetes traits, highlighting their potential as biomarkers and targets for diabetes management strategies. Conclusion: Particularly noteworthy were the Gradient Boosting Classifier and Multilayer Perceptron Neural Network, which demonstrated potential in diabetes outcome prediction. We recommend future investigations to incorporate larger cohorts and a wider array of predictive variables to enhance the models' predictive capabilities.
Keywords: diabetes, machine learning, prediction, biomarkers
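A minimal scikit-learn sketch of the pipeline the methodology section describes: recursive feature elimination with cross-validation driven by a linear SVM, followed by one of the six candidate classifiers; the synthetic data and parameter choices are placeholders, not the study's settings:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_selection import RFECV
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Placeholder stand-in for the epigenetic feature matrix
X, y = make_classification(n_samples=300, n_features=100, n_informative=10, random_state=0)

# RFE with cross-validation; the linear SVM supplies the ranking weights
selector = RFECV(SVC(kernel="linear"), step=5, cv=5).fit(X, y)
X_sel = selector.transform(X)
print("features kept:", selector.n_features_)

# Evaluate one of the candidate models on the reduced feature set
auc = cross_val_score(GradientBoostingClassifier(random_state=0), X_sel, y,
                      cv=5, scoring="roc_auc")
print("median AUC:", sorted(auc)[len(auc) // 2])
```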
Procedia PDF Downloads 55
267 Hybrid Knowledge and Data-Driven Neural Networks for Diffuse Optical Tomography Reconstruction in Medical Imaging
Authors: Paola Causin, Andrea Aspri, Alessandro Benfenati
Abstract:
Diffuse Optical Tomography (DOT) is an emergent medical imaging technique which employs NIR light to estimate the spatial distribution of optical coefficients in biological tissues for diagnostic purposes, in a noninvasive and non-ionizing manner. DOT reconstruction is a severely ill-conditioned problem due to prevalent scattering of light in the tissue. In this contribution, we present our research in adopting hybrid knowledge-driven/data-driven approaches which exploit the existence of well-assessed physical models and build neural networks upon them, integrating the availability of data. Namely, since in this context regularization procedures are mandatory to obtain a reasonable reconstruction [1], we explore the use of neural networks as tools to include prior information on the solution. 2. Materials and Methods: The idea underlying our approach is to leverage neural networks to solve PDE-constrained inverse problems of the form q* = argmin_q D(y, ỹ), (1) where D is a loss function which typically contains a discrepancy measure (or data fidelity) term plus other possible ad-hoc designed terms enforcing specific constraints. In the context of inverse problems like (1), one seeks the optimal set of physical parameters q, given the set of observations y. Moreover, ỹ is the computable approximation of y, which may be obtained from a neural network but also in a classic way via the resolution of a PDE with given input coefficients (the forward problem, Fig. 1). Due to the severe ill-conditioning of the reconstruction problem, we adopt a two-fold approach: i) we restrict the solutions (optical coefficients) to lie in a lower-dimensional subspace generated by auto-decoder type networks. This procedure forms priors of the solution (Fig. 1); ii) we use regularization procedures of the type q̂* = argmin_q D(y, ỹ) + R(q), where R(q) is a regularization functional depending on regularization parameters which can be fixed a priori or learned via a neural network in a data-driven modality. To further improve the generalizability of the proposed framework, we also infuse physics knowledge via soft penalty constraints in the overall optimization procedure (Fig. 1). 3. Discussion and Conclusion: DOT reconstruction is severely hindered by ill-conditioning. The combined use of data-driven and knowledge-driven elements is beneficial and allows improved results to be obtained, especially with a restricted dataset and in the presence of variable sources of noise.
Keywords: inverse problem in tomography, deep learning, diffuse optical tomography, regularization
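A toy version of the regularized inversion q̂* = argmin_q D(y, ỹ) + R(q), with a linear operator standing in for the PDE forward solve and an L1 penalty as R(q); everything here (dimensions, weights, optimizer) is an assumption made for illustration:

```python
import torch

# Toy linear forward model A q = y standing in for the PDE forward problem
torch.manual_seed(0)
A = torch.randn(40, 80)                    # under-determined, hence ill-posed
q_true = torch.zeros(80); q_true[::10] = 1.0
y = A @ q_true + 0.01 * torch.randn(40)

q = torch.zeros(80, requires_grad=True)
opt = torch.optim.Adam([q], lr=0.05)
lam = 0.02                                 # regularization weight, fixed a priori

for _ in range(2000):
    opt.zero_grad()
    loss = ((A @ q - y) ** 2).sum() + lam * q.abs().sum()   # D(y, ỹ) + R(q)
    loss.backward()
    opt.step()
print("data residual:", ((A @ q - y) ** 2).sum().item())
```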
Procedia PDF Downloads 74
266 Comparison of Stereotactic Body Radiation Therapy Virtual Treatment Plans Obtained with Different Collimators in the CyberKnife System in Partial Breast Irradiation: A Retrospective Study
Authors: Öznur Saribaş, Sibel Kahraman Çetintaş
Abstract:
This study aims to compare target volume and critical organ doses using CyberKnife (CK) in accelerated partial breast irradiation (APBI) in patients with early-stage breast cancer. Three different virtual plans, using the Iris, fixed, and multi-leaf collimator (MLC), were made for 5 patients who received radiotherapy in the CyberKnife system. The CyberKnife virtual plans were created with 6 Gy per day, totaling 30 Gy. Dosimetric parameters for the three collimators were analyzed according to the restrictions in the NSABP-39/RTOG 0413 protocol. The plans ensured that critical organs were protected and that the GTV received 95% of the prescribed dose. The prescribed dose was defined by an isodose curve of at least 80%. The homogeneity index (HI), conformity index (CI), treatment time (min), monitor units (MU) and doses received by critical organs were compared. As a result of the comparison of the plans, a significant difference was found for treatment duration and MU, whereas no significant difference was found for HI and CI. The lowest V30 and V15 values of the ipsilateral breast were found with the MLC. There was no significant difference between Dmax values for the lung and heart. However, the lowest mean MU and treatment duration were also found with the MLC. As a result, the target volume received the desired dose with each collimator. The contralateral breast and contralateral lung doses were lowest with the Iris, while the fixed collimator was found more suitable for cardiac doses; however, these differences were not significant. The use of fixed collimators may cause difficulties in clinical applications due to the long treatment time. The choice of collimator in breast SBRT applications with CyberKnife may vary depending on tumor size, proximity to critical organs and tumor localization.
Keywords: APBI, CyberKnife, early stage breast cancer, radiotherapy
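Index definitions vary between planning systems, but one common pair is HI = Dmax/Rx and CI = (prescription isodose volume)/(target volume); the sketch below applies those definitions to invented voxel doses, purely to make the quantities concrete:

```python
import numpy as np

# Illustrative voxel doses (Gy) in the target, with a 30 Gy prescription
target_dose = np.array([30.5, 31.2, 29.8, 30.1, 32.0, 28.9])
rx = 30.0

hi = target_dose.max() / rx                        # one common HI definition

# CI: prescription isodose volume over target volume (RTOG-style)
body_dose = np.concatenate([target_dose, [15.0, 31.0, 10.0]])  # plus nearby tissue
ci = (body_dose >= rx).sum() / target_dose.size
print(f"HI = {hi:.2f}, CI = {ci:.2f}")
```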
Procedia PDF Downloads 118
265 A Remote Sensing Approach to Estimate the Paleo-Discharge of the Lost Saraswati River of North-West India
Authors: Zafar Beg, Kumar Gaurav
Abstract:
The lost Saraswati is described as a large perennial river which was 'lost' in the desert towards the end of the Indus-Saraswati civilisation. It has been proposed earlier that the lost Saraswati flowed in the Sutlej-Yamuna interfluve, parallel to the present-day Indus River. It is believed that one of the earliest known ancient civilizations, the 'Indus-Saraswati civilization', prospered along the course of the Saraswati River. The demise of the Indus civilization is considered to be due to the desiccation of the river. Today in the Sutlej-Yamuna interfluve, we observe an ephemeral river known as the Ghaggar. It is believed that, along with the Ghaggar River, two other Himalayan rivers, the Sutlej and the Yamuna, were tributaries of the lost Saraswati and made a significant contribution to its discharge. The presence of a large number of archaeological sites and the occurrence of thick fluvial sand bodies in the subsurface of the Sutlej-Yamuna interfluve have been used to suggest that the Saraswati River was a large perennial river. Further, the wide course of about 4-7 km recognized from satellite imagery of the Ghaggar-Hakra belt between Suratgarh and Anupgarh strengthens this hypothesis. Here we develop a methodology to estimate the paleo-discharge and paleo-width of the lost Saraswati River. In doing so, we rely on the hypothesis that the ancient Saraswati River used to carry the combined flow, or some part of it, of the Yamuna, Sutlej and Ghaggar catchments. We first established regime relationships between drainage area and channel width, and between catchment area and discharge, for 29 different rivers presently flowing on the Himalayan Foreland, from the Indus in the west to the Brahmaputra in the east. We found that the width and discharge of all the Himalayan rivers scale in a similar way when plotted against their corresponding catchment areas. Using these regime curves, we calculate the width and discharge of the paleochannels originating from the Sutlej, Yamuna and Ghaggar rivers by measuring their corresponding catchment areas from satellite images. Finally, we add the discharges and widths obtained from the individual catchments to estimate the paleo-width and paleo-discharge of the Saraswati River. Our regime curves provide a first-order estimate of the paleo-discharge of the lost Saraswati.
Keywords: Indus civilization, palaeochannel, regime curve, Saraswati River
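The regime-curve procedure reduces to fitting power laws such as Q = cA^d in log space and summing the tributary contributions; a sketch with invented regime data (the actual coefficients come from the 29 surveyed rivers, and the catchment areas below are placeholders):

```python
import numpy as np

# Illustrative regime data: catchment area A (km^2) vs. discharge Q (m^3/s)
A = np.array([2.0e3, 8.0e3, 2.5e4, 6.0e4, 1.5e5])
Q = np.array([55.0, 160.0, 380.0, 750.0, 1500.0])

# Regime relationship Q = c * A^d fitted in log space
d, log_c = np.polyfit(np.log(A), np.log(Q), 1)
c = np.exp(log_c)

# Sum contributions from the hypothesized tributary catchments (areas assumed)
catchments = {"Sutlej": 5.0e4, "Yamuna": 3.0e4, "Ghaggar": 1.0e4}
paleo_q = sum(c * a**d for a in catchments.values())
print(f"paleo-discharge estimate: {paleo_q:.0f} m^3/s")
```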
Procedia PDF Downloads 179
264 Off-Line Text-Independent Arabic Writer Identification Using Optimum Codebooks
Authors: Ahmed Abdullah Ahmed
Abstract:
The task of recognizing the writer of a handwritten text has been an attractive research problem in the document analysis and recognition community, with applications in handwriting forensics, paleography, document examination and handwriting recognition. This research presents an automatic method for writer recognition from digitized images of unconstrained writings. Although a great effort has been made by previous studies to come up with various methods, their performance, especially in terms of accuracy, falls short, and room for improvement is still wide open. The proposed technique employs optimal codebook-based writer characterization, where each writing sample is represented by a set of features computed from two codebooks, beginning and ending. Unlike most classical codebook-based approaches, which segment the writing into graphemes, this study is based on fragmenting particular areas of the writing, namely the beginning and ending strokes. The proposed method starts with contour detection to extract significant information from the handwriting; curve fragmentation is then employed to split the beginning and ending zones of the handwriting into small fragments. Similar fragments of beginning strokes are grouped together to create the beginning cluster, and similarly the ending strokes are grouped to create the ending cluster. These two clusters lead to the development of two codebooks (beginning and ending) by choosing the center of every group of similar fragments. Writings under study are then represented by computing the probability of occurrence of codebook patterns, and this probability distribution is used to characterize each writer. Two writings are then compared by computing distances between their respective probability distributions. Evaluations were carried out on the standard ICFHR dataset of 206 writers using the beginning and ending codebooks separately. The ending codebook achieved the highest identification rate of 98.23%, which is the best result so far on the ICFHR dataset.
Keywords: off-line text-independent writer identification, feature extraction, codebook, fragments
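The codebook representation described above boils down to clustering stroke fragments, histogramming cluster memberships per writing sample, and comparing the resulting distributions; a compact sketch with random placeholder descriptors (the fragment features, codebook size, and L1 distance are assumptions, not the paper's choices):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Placeholder stroke-fragment descriptors (one row per fragment)
fragments = rng.normal(size=(500, 16))

# Build a codebook by clustering similar fragments and keeping the centers
codebook = KMeans(n_clusters=32, n_init=10, random_state=0).fit(fragments)

def writer_signature(frags):
    # Probability of occurrence of each codebook pattern in one writing sample
    hist = np.bincount(codebook.predict(frags), minlength=32).astype(float)
    return hist / hist.sum()

# Compare two writings by the distance between their distributions
a = writer_signature(rng.normal(size=(80, 16)))
b = writer_signature(rng.normal(size=(90, 16)))
print("L1 distance:", np.abs(a - b).sum())
```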
Procedia PDF Downloads 512
263 Study of the Relationship between Civil Engineering Parameters and the Floating of Buoy Models Made from Expanded Polystyrene-Mortar
Authors: Panarat Saengpanya
Abstract:
There were five objectives in this study: the study of housing types in water environments, the physical and mechanical properties of the buoy material, the mechanical properties of the buoy models, the floating of the buoy models, and the relationship between the civil engineering parameters and the floating of the buoy. The buoy specimens were made from expanded polystyrene (EPS) covered by 5 mm of mortar, with equal thickness on each side. Specimens are 0.05 m cubes tested at a displacement rate of 0.005 m/min. The existing test method used to assess the parameter relationships and provide comparative results is ASTM C109. The results found that the three types of housing associated with water environments were stilt houses, boat houses, and floating houses. EPS is a lightweight material that has been used in engineering applications since at least the 1950s. Its density is about a hundredth of that of mortar, while the mortar strength was found to be 72 times that of EPS. One advantage of a composite is that two or more materials can be combined to take advantage of the good characteristics of each material. The strength of the buoy is influenced by the mortar, while the floating is influenced by the EPS. The results showed that the buoy specimens compressed under loading. The stress-strain curve showed a high secant modulus before reaching the peak value. Failure occurred within 10% strain, after which the strength reduced while the strain continued. It was observed that the failure strength reduced with increasing total specimen volume. For buoy specimens with the same area, an increase in failure strength is found when the height is increased. The results showed the relationship between five parameters: the floating level, the bearing capacity, the volume, the height and the unit weight. The study found that increases in buoy height lead to corresponding decreases in both modulus and compressive strength. The total volume and the unit weight were related to the bearing capacity of the buoy.
Keywords: floating house, buoy, floating structure, EPS
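The floating-level parameter relates to unit weight through Archimedes' principle: a prismatic buoy sinks until the displaced water weighs as much as the buoy; a minimal sketch, where the composite unit weight is an assumed value:

```python
RHO_WATER = 1000.0  # kg/m^3

def draft(height_m: float, unit_weight_kg_m3: float) -> float:
    """Submerged depth of a prismatic buoy; assumes it floats (unit weight < water's)."""
    return height_m * unit_weight_kg_m3 / RHO_WATER

# Assumed composite unit weight for an EPS core with a mortar skin
print(f"draft = {draft(0.05, 400.0) * 1000:.1f} mm of a 50 mm buoy")
```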
Procedia PDF Downloads 146
262 Experimental Investigation on the Effect of Cross Flow on Discharge Coefficient of an Orifice
Authors: Mathew Saxon A, Aneeh Rajan, Sajeev P
Abstract:
Many fluid flow applications employ different types of orifices to control the flow rate or to reduce the pressure. Discharge coefficients generally vary from 0.6 to 0.95, depending on the type of orifice. The tabulated values of discharge coefficients for the various types of orifices can be used in most common applications. The upstream and downstream flow conditions of an orifice are hardly considered while choosing the discharge coefficient of an orifice, but the literature shows that the discharge coefficient can be affected by the presence of cross flow. Cross flow is defined as the condition wherein a fluid is injected nearly perpendicular to a flowing fluid. Most researchers have worked on water being injected into a cross-flow of water. The present work deals with water-to-gas systems, in which water is injected in a normal direction into a flowing stream of gas. The test article used in the current work is called a thermal regulator, which is used in a liquid rocket engine to reduce the temperature of hot gas tapped from the gas generator by injecting water into the hot gas so that cooler gas can be supplied to the turbine. In a thermal regulator, water is injected through an orifice in a normal direction into the hot gas stream, but the injection orifice had been calibrated under backpressure by maintaining a stagnant gas medium downstream. The motivation for the present study arose from the observation of a lower Cd of the orifice in flight compared to the calibrated Cd. A systematic experimental investigation is carried out in this paper to study the effect of cross flow on the discharge coefficient of an orifice in a water-to-gas system. The study reveals that there is an appreciable reduction in the discharge coefficient with cross flow compared to that without cross flow. It is found that the discharge coefficient greatly depends on the ratio of the momentum of the injected water to the momentum of the gas cross flow. The effective discharge coefficients of different orifices were normalized using the discharge coefficient without cross flow, and it is observed that the normalized curves of the effective discharge coefficient of different orifices plotted against momentum ratio collapse into a single curve. Further, an equation is formulated using the test data to predict the effective discharge coefficient with cross flow from the calibrated Cd value without cross flow.
Keywords: cross flow, discharge coefficient, orifice, momentum ratio
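The momentum ratio governing the reported collapse is the ratio of the injected-water momentum flux to that of the gas cross flow; the sketch computes it and applies a hypothetical collapse-curve form (the true correlation is the equation fitted from the test data, which the abstract does not give):

```python
def momentum_ratio(rho_w, v_w, rho_g, v_g):
    # Injected-water momentum flux over gas cross-flow momentum flux
    return (rho_w * v_w**2) / (rho_g * v_g**2)

def cd_effective(cd_no_crossflow, mr, a=1.0):
    # Hypothetical collapse form Cd_eff/Cd0 = MR/(a + MR), for illustration only
    return cd_no_crossflow * mr / (a + mr)

mr = momentum_ratio(rho_w=1000.0, v_w=10.0, rho_g=1.2, v_g=50.0)
print(f"MR = {mr:.1f}, Cd_eff ≈ {cd_effective(0.72, mr):.3f}")
```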
Procedia PDF Downloads 142
261 Estimates of Freshwater Content from ICESat-2 Derived Dynamic Ocean Topography
Authors: Adan Valdez, Shawn Gallaher, James Morison, Jordan Aragon
Abstract:
Global climate change has impacted atmospheric temperatures, contributing to rising sea levels, decreasing sea ice, and increased freshening of high-latitude oceans. This freshening has contributed to increased stratification, inhibiting local mixing and nutrient transport and modifying regional circulations in polar oceans. In recent years, the Western Arctic has seen an increase in freshwater volume at an average rate of 397 ± 116 km³/year. The majority of the freshwater volume resides in the Beaufort Gyre surface lens, driven by anticyclonic wind forcing, sea ice melt, and Arctic river runoff. The total climatological freshwater content is typically defined as water fresher than a salinity of 34.8. The near-isothermal nature of Arctic seawater and non-linearities in the equation of state for near-freezing waters result in a salinity-driven pycnocline, as opposed to the temperature-driven density structure seen at lower latitudes. In this study, we investigate the relationship between freshwater content and remotely sensed dynamic ocean topography (DOT). In-situ measurements of freshwater content are useful in providing information on the freshening rate of the Beaufort Gyre; however, their collection is costly and time-consuming. Dynamic ocean topography derived from NASA's Advanced Topographic Laser Altimeter System (ATLAS) and freshwater content derived from Air-Expendable CTDs (AXCTDs) are used to develop a linear regression model. In-situ data for the regression model are collected across the 150° West meridian, which typically defines the centerline of the Beaufort Gyre. Two freshwater content models are determined by integrating the freshwater volume between the surface and an isopycnal corresponding to reference salinities of 28.7 and 34.8. These salinities correspond to those of the winter pycnocline and the total climatological freshwater content, respectively. Using each model, we determine the strength of the linear relationship between freshwater content and satellite-derived DOT. The results of this modeling study could provide a future predictive capability for freshwater volume changes in the Beaufort-Chukchi Sea using non in-situ methods. Successful employment of ICESat-2's DOT approximation of freshwater content could potentially reduce reliance on field deployment platforms to characterize physical ocean properties.
Keywords: ICESat-2, dynamic ocean topography, freshwater content, beaufort gyre
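Freshwater content relative to a reference salinity is the vertical integral FWC = ∫ (S_ref - S(z))/S_ref dz, which is then regressed against DOT; a sketch with an invented salinity profile and invented DOT-FWC pairs:

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.stats import linregress

def freshwater_content(z, s, s_ref=34.8):
    # FWC = integral of (S_ref - S)/S_ref over depths fresher than S_ref (meters)
    frac = np.clip((s_ref - s) / s_ref, 0.0, None)
    return trapezoid(frac, z)

# Illustrative profile: salinity freshening toward the surface
z = np.linspace(0.0, 400.0, 81)
s = 34.8 - 6.0 * np.exp(-z / 60.0)
print(f"FWC = {freshwater_content(z, s):.1f} m")

# Illustrative DOT (m) vs. FWC (m) pairs for the linear model
dot = np.array([0.10, 0.25, 0.40, 0.55, 0.70])
fwc = np.array([6.0, 11.0, 15.5, 20.0, 25.0])
fit = linregress(dot, fwc)
print(f"FWC ≈ {fit.slope:.1f} * DOT + {fit.intercept:.1f}, r = {fit.rvalue:.2f}")
```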
Procedia PDF Downloads 85
260 Microscopic Analysis of Interfacial Transition Zone of Cementitious Composites Prepared by Various Mixing Procedures
Authors: Josef Fládr, Jiří Němeček, Veronika Koudelková, Petr Bílý
Abstract:
Mechanical parameters of cementitious composites differ quite significantly based on the composition of the cement matrix. They are also influenced by mixing times and procedures. The research presented in this paper was aimed at identifying differences in the microstructure of normal strength (NSC) and differently mixed high strength (HSC) cementitious composites. Scanning electron microscopy (SEM) investigation together with energy dispersive X-ray spectroscopy (EDX) phase analysis of NSC and HSC samples was conducted. Evaluation of the interfacial transition zone (ITZ) between the aggregate and cement matrix was performed. The volume share, thickness, porosity and composition of the ITZ were studied. In the case of HSC, samples obtained by several different mixing procedures were compared in order to find the most suitable procedure. In the case of NSC, an ITZ was identified around 40-50% of aggregate grains, and its thickness typically ranged between 10 and 40 µm. Higher porosity and a lower share of clinker were observed in this area as a result of the increased water-to-cement ratio (w/c) and the lack of fine particles improving the grading curve of the aggregate. A typical ITZ with lower Ca content was observed in only one HSC sample, where it developed around less than 15% of aggregate grains. The typical thickness of the ITZ in this sample was similar to that of the ITZ in NSC (between 5 and 40 µm). In the remaining four HSC samples, no ITZ was observed. In general, the share of ITZ in HSC samples was found to be significantly smaller than in NSC samples. As the ITZ is the weakest part of the material, this result explains to a large extent the improved mechanical properties of HSC compared to NSC. Based on the comparison of the characteristics of the ITZ in HSC samples prepared by different mixing procedures, the most suitable mixing procedure from the point of view of ITZ properties was identified.
Keywords: electron diffraction spectroscopy, high strength concrete, interfacial transition zone, normal strength concrete, scanning electron microscopy
Procedia PDF Downloads 292
259 An Experimental Investigation of Rehabilitation and Strengthening of Reinforced Concrete T-Beams under Static Monotonic Increasing Loading
Authors: Salem Alsanusi, Abdulla Alakad
Abstract:
An experimental investigation was carried out to study the flexural behaviour of reinforced concrete T-beams. The beams were loaded to pre-designated stress levels expressed as percentages of the calculated collapse loads and then repaired by either a reinforced concrete jacket or externally bolted steel plates. Twelve full-scale beams were tested in this experimental program. Eight of the twelve beams were loaded to different load levels, and tests were performed on these beams before and after repair with a reinforced concrete jacket (RCJ). The applied load levels were 60%, 77% and 100% of the calculated collapse loads. The remaining four beams were tested before and after repair with a bolted steel plate (BSP). Of these four beams, two were loaded to 100% of the calculated failure load and the remaining two were not subjected to any load. The eight beams assigned to the RCJ test were repaired using a reinforced concrete jacket, and the four beams assigned to the BSP test were all repaired using a steel plate at the bottom. All the strengthened beams were gradually loaded until failure occurred. In each loading case, the behaviour of the beams before and after strengthening was studied through close inspection of crack propagation and extensive measurement of deformations and strength. The stress-strain curve for the reinforcing steel and the failure strains measured in the tests were utilized in the calculation of the failure loads of the beams before and after strengthening. The calculated failure loads were close to the actual test values for the beams before repair, ranging from 85% to 90%, and for the beams repaired by reinforced concrete jacket, ranging from 70% to 85%; for the beams repaired by bolted steel plates, they ranged from 50% to 85%. It was observed that both jacketing and bolted steel plate methods could effectively restore the full flexural capacity of the damaged beams. The reinforced concrete jacket increased the failure load by about 67%, whereas the bolted steel plates recovered the failure load.
Keywords: rehabilitation, strengthening, reinforced concrete, beams deflection, bending stresses
Procedia PDF Downloads 306
258 Performance of Reinforced Concrete Beams under Different Fire Durations
Authors: Arifuzzaman Nayeem, Tafannum Torsha, Tanvir Manzur, Shaurav Alam
Abstract:
Performance evaluation of reinforced concrete (RC) beams subjected to accidental fire is significant for post-fire capacity measurement. The mechanical properties of any RC beam degrade due to heating, since the strength and modulus of concrete and reinforcement suffer considerable reduction at elevated temperatures. Moreover, fire-induced thermal dilation and shrinkage cause internal stresses within the concrete and eventually result in cracking, spalling, and loss of stiffness, which ultimately leads to a lower service life. However, conducting a full-scale comprehensive experimental investigation of RC beams exposed to fire is difficult and cost-intensive, and finite element (FE) based numerical study can provide an economical alternative for evaluating the post-fire capacity of RC beams. In this study, an attempt has been made to study the fire behavior of RC beams under different durations of fire using the FE software package ABAQUS. The concrete damaged plasticity model in ABAQUS was used to simulate the behavior of RC beams. The effect of temperature on the strength and modulus of concrete and steel was simulated following the relevant Eurocodes. Initially, the results of the FE models were validated using several experimental results from available scholarly articles. It was found that the response of the developed FE models matched the experimental outcomes quite well for beams without heat exposure. The FE analysis of beams subjected to fire showed some deviation from the experimental results, particularly in terms of stiffness degradation. However, the ultimate strength and deflection of the FE models were similar to the experimental values. The developed FE models thus exhibited good potential to predict the fire behavior of RC beams. Once validated, the FE models were then used to analyze several RC beams of different strengths (ranging between 20 MPa and 50 MPa) exposed to the standard fire curve (ASTM E119) for different durations. The post-fire performance of the RC beams was investigated in terms of load-deflection behavior, flexural strength, and deflection characteristics.
Keywords: fire durations, flexural strength, post fire capacity, reinforced concrete beam, standard fire
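The Eurocode treatment of temperature effects reduces, in practice, to interpolating tabulated reduction factors; the sketch below uses k_c(T) values for siliceous-aggregate concrete as commonly tabulated in EN 1992-1-2, quoted from memory and worth verifying against the code text:

```python
import numpy as np

# k_c(T) for siliceous-aggregate concrete (EN 1992-1-2 style table; verify values)
T_pts  = np.array([20, 100, 200, 300, 400, 500, 600, 700, 800, 900, 1000, 1100, 1200])
kc_pts = np.array([1.00, 1.00, 0.95, 0.85, 0.75, 0.60, 0.45, 0.30,
                   0.15, 0.08, 0.04, 0.01, 0.00])

def fc_at_temperature(fc20: float, temp_c: float) -> float:
    # Linear interpolation between tabulated points
    return fc20 * np.interp(temp_c, T_pts, kc_pts)

print(f"30 MPa concrete at 550 °C: {fc_at_temperature(30.0, 550.0):.1f} MPa")
```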
Procedia PDF Downloads 140
257 Microstructure of Virgin and Aged Asphalts by Small-Angle X-Ray Scattering
Authors: Dong Tang, Yongli Zhao
Abstract:
The study of the microstructure of asphalt is of great importance for the analysis of its macroscopic properties. However, the peculiarities of the chemical composition of asphalt itself and the limitations of existing direct imaging techniques have caused researchers to face many obstacles in studying its microstructure. The advantage of small-angle X-ray scattering (SAXS) is that it allows quantitative determination of the internal structure of opaque materials and is suitable for analyzing the microstructure of materials. Therefore, the SAXS technique was used to study the evolution of microstructures on the nanoscale during asphalt aging. The reasons for the change in scattering contrast during asphalt aging were also explained with the help of Fourier transform infrared spectroscopy (FTIR). The SAXS experimental results show that the SAXS curves of asphalt are similar to the scattering curves of objects with two-level structures. The Porod curve for asphalt shows that there is no obvious interface between the micelles and the surrounding medium, only a fluctuation of the electron density between the two. Beaucage model fits to the SAXS patterns show that the scattering exponent P of the asphaltene clusters, as well as the size of the micelles, gradually increases with the aging of the asphalt. Furthermore, aggregation exists between the micelles of asphalt and becomes more pronounced with increasing aging. During asphalt aging, the electron density difference between the micelles and the surrounding medium gradually increases, leading to an increase in the scattering contrast of the asphalt. Under long-term aging conditions, due to the gradual transition from maltenes to asphaltenes, the electron density difference between the micelles and the surrounding medium decreases, resulting in a decrease in the SAXS scattering contrast of asphalt. Finally, this paper correlates the macroscopic properties of asphalt with microstructural parameters, and the results show that the high-temperature rutting resistance of asphalt is enhanced and the low-temperature cracking resistance decreases due to the aggregation of micelles and the generation of new micelles. These results are useful for understanding the relationship between changes in microstructure and changes in properties during asphalt aging and provide theoretical guidance for the regeneration of aged asphalt.
Keywords: asphalt, Beaucage model, microstructure, SAXS
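For reference, the single-level Beaucage unified function combines a Guinier term with a structurally limited power law; a sketch of the form used in such fits (parameter values here are arbitrary, not the paper's fitted results):

```python
import numpy as np
from scipy.special import erf

def beaucage(q, G, Rg, B, P):
    """One-level unified Beaucage form:
    I(q) = G exp(-q^2 Rg^2 / 3) + B [erf(q Rg / sqrt(6))^3 / q]^P
    """
    guinier = G * np.exp(-(q * Rg) ** 2 / 3.0)
    power_law = B * (erf(q * Rg / np.sqrt(6.0)) ** 3 / q) ** P
    return guinier + power_law

q = np.logspace(-2, 0, 50)   # scattering vector (illustrative range, 1/nm)
print(beaucage(q, G=100.0, Rg=5.0, B=0.05, P=3.5)[:3])
```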
Procedia PDF Downloads 80
256 Hypertension and Obesity: A Cross-National Comparison of BMI and Waist-Height Ratio
Authors: Adam M. Yates, Julie E. Byles
Abstract:
Hypertension has been identified as a prominent co-morbidity of obesity. To improve clinical intervention for hypertension, it is critical to identify metrics that most accurately reflect the risk of increased morbidity. Two of the most relevant and accurate measures of increased risk of hypertension due to excess adipose tissue are Body Mass Index (BMI) and Waist-Height Ratio (WHtR). Previous research has examined these measures in cross-national and cross-ethnic studies, but has most often relied on secondary means such as meta-analysis to identify and evaluate the efficacy of individual body mass measures. In this study, we instead use cross-sectional analysis to assess the cross-ethnic discriminative power of BMI and WHtR in predicting the risk of hypertension. Using the WHO SAGE survey, which collected anthropometric and biometric data from respondents in six middle-income countries (China, Ghana, India, Mexico, Russia, South Africa), we implement logistic regression to examine the discriminative power of measured BMI and WHtR in a known population of hypertensive and non-hypertensive respondents. We control for gender and age to identify whether the optimum cut-off points that are adequately sensitive as tests for risk of hypertension may differ between groups. We report results for OR, RR, and ROC curves for each of the six SAGE countries. As seen in the existing literature, the results demonstrate that both WHtR and BMI are significant predictors of hypertension (p < .01). For these six countries, we find that cut-off points for WHtR may be dependent upon gender, age and ethnicity. While an optimum omnibus cut-point for WHtR may be 0.55, the results also suggest that the gender and age relationship with WHtR may warrant the development of individual cut-offs to optimize health outcomes. Trends across multiple countries show that the optimum cut-point for WHtR increases with age, while the area under the curve (AUROC) decreases for both men and women. Comparison between BMI and WHtR indicates that BMI may remain more robust than WHtR. Implications for public health policy are discussed.
Keywords: hypertension, obesity, waist-height ratio, SAGE
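One standard way to derive the cut-points the abstract discusses is to fit a logistic model, trace the ROC curve, and take the threshold maximizing Youden's J; a sketch on synthetic WHtR data (all values are placeholders for the SAGE measurements):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)

# Placeholder WHtR values and hypertension labels standing in for SAGE data
whtr = np.concatenate([rng.normal(0.52, 0.06, 400), rng.normal(0.58, 0.06, 200)])
hyp = np.concatenate([np.zeros(400), np.ones(200)]).astype(int)

model = LogisticRegression().fit(whtr.reshape(-1, 1), hyp)
p = model.predict_proba(whtr.reshape(-1, 1))[:, 1]

fpr, tpr, thr = roc_curve(hyp, p)
j = np.argmax(tpr - fpr)                     # Youden's J picks the cut-point
p_cut = np.clip(thr[j], 1e-6, 1 - 1e-6)
# Invert the single-feature logistic model to express the cut-point in WHtR units
whtr_cut = (np.log(p_cut / (1 - p_cut)) - model.intercept_[0]) / model.coef_[0, 0]
print(f"AUROC = {roc_auc_score(hyp, p):.2f}, WHtR cut-point ≈ {whtr_cut:.2f}")
```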
Procedia PDF Downloads 478255 The Optimal Order Policy for the Newsvendor Model under Worker Learning
Authors: Sunantha Teyarachakul
Abstract:
We consider the worker-learning Newsvendor Model, under lost sales for unmet demand, with the research objective of proposing the cost-minimizing order policy and lot size, scheduled to arrive at the beginning of the selling period. In general, the Newsvendor Model is used to find the optimal order quantity for perishable items such as fashionable products or those with seasonal demand or short life cycles. Technically, it is used when product demand is stochastic and available for a single selling season, and when the vendor has only one opportunity to purchase, possibly with long ordering lead times. Our work differs from the classical Newsvendor Model in that we incorporate the human factor (specifically worker learning) and its influence on the cost of processing units into the model, which we describe using the well-known Wright's Learning Curve. Most of the assumptions of the classical Newsvendor Model are maintained in our work, such as constant per-unit costs of leftover and shortage, zero initial inventory, and continuous time. Our problem is challenging in that the best order quantity in the classical model, which balances the over-stocking and under-stocking costs, is no longer optimal. Specifically, when the cost savings from worker learning are added to the expected total cost, the convexity of the cost function will likely not be maintained. This calls for a new way of determining the optimal order policy. In response to these challenges, we found a number of characteristics of the expected cost function and its derivatives, which we then used to formulate the optimal ordering policy. Examples of such characteristics are: the optimal order quantity exists and is unique if demand follows a Uniform Distribution; if demand follows a Beta Distribution with certain properties of its parameters, the second derivative of the expected cost function has at most two roots; and there exists a specific lot size that satisfies the first-order condition. Our research results could be helpful for the analysis of supply chain coordination and of periodic review systems for similar problems.Keywords: inventory management, Newsvendor model, order policy, worker learning
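The abstract's central point — that learning-related cost savings can break the convexity on which the classical critical-fractile solution relies — can be explored numerically. The sketch below is not the authors' analytic derivation; it simply evaluates a single-period lost-sales cost with Wright's-curve processing costs over a grid of order quantities, using Monte Carlo for the demand expectations. All parameter values are assumed for illustration.

```python
import numpy as np

def wright_unit_costs(c1, rate, n):
    """Wright's learning curve: the i-th unit costs c1 * i**b with
    b = log2(rate), i.e. each doubling of volume multiplies unit cost by rate."""
    b = np.log2(rate)
    return c1 * np.arange(1, n + 1) ** b

def expected_cost(Q, c1, rate, co, cu, d_lo, d_hi, n_mc=100_000, seed=0):
    """Processing cost under learning plus expected overage/underage cost;
    demand ~ Uniform(d_lo, d_hi), unmet demand is lost."""
    processing = wright_unit_costs(c1, rate, Q).sum()
    D = np.random.default_rng(seed).uniform(d_lo, d_hi, n_mc)
    overage = co * np.maximum(Q - D, 0).mean()
    underage = cu * np.maximum(D - Q, 0).mean()
    return processing + overage + underage

# Grid search: with learning folded in, the cost need not be convex in Q,
# so we do not rely on the classical critical-fractile condition.
costs = {Q: expected_cost(Q, c1=5.0, rate=0.85, co=2.0, cu=10.0,
                          d_lo=50, d_hi=150) for Q in range(50, 151)}
Q_star = min(costs, key=costs.get)
print("best order quantity on grid:", Q_star)
```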
Procedia PDF Downloads 416254 Clustering and Modelling Electricity Conductors from 3D Point Clouds in Complex Real-World Environments
Authors: Rahul Paul, Peter Mctaggart, Luke Skinner
Abstract:
Maintaining public safety and network reliability are the core objectives of all electricity distributors globally. For many electricity distributors, managing vegetation clearances from their above-ground assets (poles and conductors) is the most important and costly risk mitigation control employed to meet these objectives. Light Detection and Ranging (LiDAR) is widely used by utilities as a cost-effective method to inspect their spatially distributed assets at scale, often captured using high-powered LiDAR scanners attached to fixed-wing or rotary aircraft. The resulting 3D point cloud model is used by these utilities to perform engineering-grade measurements that guide the prioritisation of vegetation cutting programs. Advances in computer vision and machine learning are increasingly applied to increase automation and reduce inspection costs and time; however, real-world LiDAR capture variables (e.g., aircraft speed and height) create complexity, noise, and missing data, reducing the effectiveness of these approaches. This paper proposes a method for identifying each conductor from LiDAR data via clustering methods that can precisely reconstruct conductors in complex real-world configurations in the presence of high levels of noise. It fits 3D catenary models to the captured LiDAR data points of individual clusters using a least-squares method, and an iterative learning process is used to identify potential conductor models between pole pairs. The proposed method identifies the optimum parameters of the catenary function and then fits the LiDAR points to reconstruct the conductors.Keywords: point cloud, LiDAR data, machine learning, computer vision, catenary curve, vegetation management, utility industry
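A minimal sketch of the curve-fitting step follows. Once a conductor cluster is projected onto the vertical plane through its pole pair, each span reduces to a 2D catenary z = c + a·cosh((x − x0)/a), which a least-squares fitter can recover. The span geometry, noise level, and initial guesses are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def catenary(x, a, x0, c):
    """2D catenary: z = c + a*cosh((x - x0)/a); a controls the sag,
    x0 is the horizontal position of the lowest point."""
    return c + a * np.cosh((x - x0) / a)

# Hypothetical conductor points projected onto the vertical plane through
# the pole pair (x: distance along span in m, z: height in m).
rng = np.random.default_rng(2)
x = np.linspace(0, 80, 120)
z = catenary(x, a=200.0, x0=40.0, c=-190.0) + rng.normal(0, 0.03, x.size)

popt, _ = curve_fit(catenary, x, z, p0=[150.0, 40.0, -140.0])
print("fitted catenary parameters (a, x0, c):", np.round(popt, 2))
```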
Procedia PDF Downloads 99253 Targeting Mre11 Nuclease Overcomes Platinum Resistance and Induces Synthetic Lethality in Platinum Sensitive XRCC1 Deficient Epithelial Ovarian Cancers
Authors: Adel Alblihy, Reem Ali, Mashael Algethami, Ahmed Shoqafi, Michael S. Toss, Juliette Brownlie, Natalie J. Tatum, Ian Hickson, Paloma Ordonez Moran, Anna Grabowska, Jennie N. Jeyapalan, Nigel P. Mongan, Emad A. Rakha, Srinivasan Madhusudan
Abstract:
Platinum resistance is a clinical challenge in ovarian cancer. Platinating agents induce DNA damage, which activates Mre11 nuclease-directed DNA damage signalling and response (DDR). Upregulation of DDR may promote chemotherapy resistance. Here we have comprehensively evaluated Mre11 in epithelial ovarian cancers. In a clinical cohort that received platinum-based chemotherapy (n=331), Mre11 protein overexpression was associated with an aggressive phenotype and poor progression-free survival (PFS) (p=0.002). In The Cancer Genome Atlas (TCGA) ovarian cancer cohort (n=498), Mre11 gene amplification was observed in a subset of serous tumours (5%) and correlated highly with Mre11 mRNA levels (p<0.0001). Altered Mre11 levels were linked with genome-wide alterations that can influence platinum sensitivity. At the transcriptomic level (n=1259), Mre11 overexpression was associated with poor PFS (p=0.003). ROC analysis showed an area under the curve (AUC) of 0.642 for response to platinum-based chemotherapy. Pre-clinically, Mre11 depletion by gene knockdown or blockade by a small-molecule inhibitor (Mirin) reversed platinum resistance in ovarian cancer cells and in 3D spheroid models. Importantly, Mre11 inhibition was synthetically lethal in platinum-sensitive XRCC1-deficient ovarian cancer cells and 3D spheroids. Selective cytotoxicity was associated with DNA double-strand break (DSB) accumulation, S-phase cell cycle arrest, and increased apoptosis. We conclude that pharmaceutical development of Mre11 inhibitors is a viable clinical strategy for platinum sensitization and synthetic lethality in ovarian cancer.Keywords: MRE11, XRCC1, ovarian cancer, platinum sensitization, synthetic lethality
Procedia PDF Downloads 129252 Predictive Value Modified Sick Neonatal Score (MSNS) On Critically Ill Neonates Outcome Treated in Neonatal Intensive Care Unit (NICU)
Authors: Oktavian Prasetia Wardana, Martono Tri Utomo, Risa Etika, Kartika Darma Handayani, Dina Angelika, Wurry Ayuningtyas
Abstract:
Background: Critically ill neonates are newborn babies with high-risk factors that can potentially cause disability and/or death. Scoring systems for determining disease severity have been widely developed, including several designed for use in neonates. The SNAPPE-II method, which has been used as a mortality-predictor scoring system in several referral centers, was found to be slow in assessing the outcome of critically ill neonates in the Neonatal Intensive Care Unit (NICU). Objective: To analyze the predictive value of the MSNS for the outcome of critically ill neonates from arrival up to 24 hours after NICU admission. Methods: A longitudinal observational analytic study based on medical record data was conducted from January to August 2022. For each subject, medical record data were collected on gestational age, mode of delivery, APGAR score at birth, resuscitation measures at birth, duration of resuscitation, post-resuscitation ventilation, physical examination at birth (including vital signs and any congenital abnormalities), routine laboratory results, and neonatal outcome. Results: This study involved 105 critically ill neonates admitted to the NICU: 50 (47.6%) died and 55 (52.4%) survived. There were more males than females (61% vs. 39%). The mean gestational age of the subjects was 33.8 ± 4.28 weeks, and the mean birth weight was 1820.31 ± 33.18 g. The mean MSNS score of neonates with a fatal outcome was lower than that of survivors. A ROC curve with an MSNS cut-point of <10.5 gave an AUC of 93.5% (95% CI: 88.3-98.6), sensitivity of 84% (95% CI: 80.5-94.9), specificity of 80% (95% CI: 88.3-98.6), Positive Predictive Value (PPV) of 79.2%, Negative Predictive Value (NPV) of 84.6%, and Risk Ratio (RR) of 5.14, with a Hosmer-Lemeshow test result of p>0.05. Conclusion: The MSNS score has good predictive value and good calibration for the outcomes of critically ill neonates admitted to the NICU.Keywords: critically ill neonate, outcome, MSNS, NICU, predictive value
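The reported predictive values follow directly from the sensitivity, specificity, and observed mortality via Bayes' rule; the short sketch below reproduces the abstract's PPV and NPV from those figures.

```python
def predictive_values(sens, spec, prev):
    """PPV and NPV from sensitivity, specificity, and outcome prevalence (Bayes' rule)."""
    ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
    npv = spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)
    return ppv, npv

# Values from the abstract: sens 84%, spec 80%, 50 deaths among 105 neonates.
ppv, npv = predictive_values(0.84, 0.80, 50 / 105)
print(f"PPV = {ppv:.1%}, NPV = {npv:.1%}")  # ~79.2% and ~84.6%, matching the study
```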
Procedia PDF Downloads 69251 Gene Expressions in Left Ventricle Heart Tissue of Rat after 150 MeV Proton Irradiation
Abstract:
Introduction: In mediastinal radiotherapy, and to a lesser extent in total-body irradiation (TBI), radiation exposure may lead to the development of cardiac disease. Radiation-induced heart disease is dose-dependent and is characterized by a loss of cardiac function associated with progressive degeneration of heart cells. We aimed to determine the in-vivo effects of radiation on fibronectin, ColaA1, ColaA2, galectin, and TGFb1 gene expression levels in the left ventricle heart tissue of rats after irradiation. Material and method: Four untreated adult Wistar rats served as the control group (group A). In group B, 4 adult Wistar rats were irradiated with a single 20 Gy dose of a 150 MeV proton beam, delivered locally to the heart only. In the heart-plus-lung group (group C), 4 adult rats received the same heart irradiation plus lateral irradiation of 50% of the lung. At 8 weeks after irradiation, the animals were sacrificed and the left ventricles were dropped in liquid nitrogen for RNA extraction with the Absolutely RNA® Miniprep Kit (Stratagene, Cat no. 400800). cDNA was synthesized using M-MLV reverse transcriptase (Life Technologies, Cat no. 28025-013). qPCR was performed on a Bio-Rad iQ5 real-time PCR system using the relative standard curve method. Results: We found that fibronectin gene expression significantly increased in group C compared to the control group but showed no significant change in group B. The mRNA expression levels of ColaA1 and ColaA2 showed no significant changes between the control and irradiated groups. Galectin expression significantly increased only in group C compared to group A. TGFb1 expression showed a significant enhancement relative to group A that was greater in group C than in group B. Conclusion: In summary, 20 Gy of proton exposure of heart tissue may lead to detectable damage in heart cells and may disrupt their function as components of the heart tissue structure at the molecular level.Keywords: gene expression, heart damage, proton irradiation, radiotherapy
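For the quantification step, the sketch below outlines the relative standard curve method named above: a dilution series gives a linear Ct-versus-log10(quantity) calibration for each gene, unknowns are interpolated from their Ct values, and the target gene is normalised to an endogenous control. All Ct values and the control quantity here are illustrative assumptions, not data from the study.

```python
import numpy as np

def fit_standard_curve(log10_qty, ct):
    """Linear fit Ct = slope*log10(quantity) + intercept over a dilution series."""
    slope, intercept = np.polyfit(log10_qty, ct, 1)
    return slope, intercept

def quantity_from_ct(ct, slope, intercept):
    """Interpolate an unknown sample's relative quantity from its Ct value."""
    return 10 ** ((ct - intercept) / slope)

# Hypothetical 5-point 10-fold dilution series (relative quantities 1 .. 1e-4).
log10_qty = np.array([0.0, -1.0, -2.0, -3.0, -4.0])
ct_target = np.array([18.1, 21.5, 24.8, 28.2, 31.6])  # e.g. a fibronectin series
slope, intercept = fit_standard_curve(log10_qty, ct_target)

# Unknown sample, normalised to an endogenous control quantified the same way.
qty_target = quantity_from_ct(22.3, slope, intercept)
qty_control = 0.05  # from the control gene's own standard curve (illustrative)
print("normalised expression:", qty_target / qty_control)
```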
Procedia PDF Downloads 489250 Development of Ketorolac Tromethamine Encapsulated Stealth Liposomes: Pharmacokinetics and Bio Distribution
Authors: Yasmin Begum Mohammed
Abstract:
Ketorolac tromethamine (KTM) is a non-steroidal anti-inflammatory drug with potent analgesic and anti-inflammatory activity owing to its prostaglandin-inhibitory effect; it is a non-selective cyclo-oxygenase inhibitor. The drug is currently given orally and intramuscularly in multiple divided doses, clinically for the management of arthritis, cancer pain, post-surgical pain, and migraine. KTM has a short biological half-life of 4 to 6 hours, which necessitates frequent dosing to sustain its action. The frequent occurrence of gastrointestinal bleeding, perforation, peptic ulceration, and renal failure has led to the development of other drug delivery strategies for the appropriate delivery of KTM. The ideal solution would be to target the drug only to the cells or tissues affected by the disease, and such targeting can be achieved effectively by liposomes, which are biocompatible and biodegradable. The aim of this study was to develop a parenteral liposome formulation of KTM with improved efficacy and reduced side effects, targeting the inflammation due to arthritis. PEG-anchored (stealth) and non-PEG-anchored liposomes were prepared by the thin-film hydration technique followed by extrusion cycles and were characterized in vitro and in vivo. Stealth liposomes (SLs) exhibited a high encapsulation efficiency (94%) and retained 52% of the drug during 24 h release studies, with good stability for a period of 1 month at -20°C and 4°C. SLs showed a maximum of about 55% edema inhibition, with a significant analgesic effect. SLs differed markedly from non-SL formulations, with increases in the area under the plasma concentration-time curve, t₁/₂, and mean residence time, and reduced clearance. For SLs, 0.3% of the drug was detected in the arthritis-induced paw, with significantly reduced drug localization in the liver, spleen, and kidney compared with conventional liposomes. Thus, SLs help to increase the therapeutic efficacy of KTM by increasing its targeting potential at the inflammatory region.Keywords: biodistribution, ketorolac tromethamine, stealth liposomes, thin film hydration technique
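The pharmacokinetic parameters named above (area under the concentration-time curve, t₁/₂, mean residence time) are typically obtained by non-compartmental analysis. The sketch below shows the standard calculations on an assumed concentration-time profile; the data points are illustrative and not values from the study.

```python
import numpy as np

def nca_parameters(t, c, n_terminal=3):
    """Non-compartmental PK: AUC and AUMC by the linear trapezoidal rule,
    terminal half-life from a log-linear fit to the last points, MRT = AUMC/AUC."""
    dt = np.diff(t)
    auc = np.sum((c[1:] + c[:-1]) / 2.0 * dt)
    aumc = np.sum((c[1:] * t[1:] + c[:-1] * t[:-1]) / 2.0 * dt)
    slope, _ = np.polyfit(t[-n_terminal:], np.log(c[-n_terminal:]), 1)
    t_half = np.log(2) / -slope
    return auc, t_half, aumc / auc

# Assumed plasma concentration-time profile (t in h, c in ug/mL).
t = np.array([0.25, 0.5, 1.0, 2.0, 4.0, 8.0, 12.0, 24.0])
c = np.array([9.5, 8.7, 7.4, 5.6, 3.1, 1.0, 0.35, 0.04])

auc, t_half, mrt = nca_parameters(t, c)
print(f"AUC = {auc:.1f} ug*h/mL, t1/2 = {t_half:.1f} h, MRT = {mrt:.1f} h")
```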
Procedia PDF Downloads 295