Search results for: weighted interval
170 Neuromyelitis Optica Area Postrema Syndrome (NMOSD-APS) in a Fifteen-Year-Old Girl: A Case Report
Authors: Merilin Ivanova Ivanova, Kalin Dimitrov Atanasov, Stefan Petrov Enchev
Abstract:
Background: Neuromyelitis optica spectrum disorder (NMOSD), also known as Devic's disease, is a relapsing demyelinating autoimmune inflammatory disorder of the central nervous system associated with anti-aquaporin 4 (AQP4) antibodies that can manifest with devastating secondary neurological deficits. The optic nerves and the spinal cord are most commonly affected; clinically, this often presents as optic neuritis (loss of vision), transverse myelitis (weakness or paralysis of the extremities), lack of bladder and bowel control, and numbness. Area postrema syndrome (APS) is a core clinical entity of NMOSD that adds the following symptoms to the clinical picture: intractable nausea, vomiting and hiccups. It usually occurs in isolation at onset and can lead to a significant delay in diagnosis. The condition may have features similar to multiple sclerosis (MS), but the episodes are worse in NMO and it is treated differently. It can be relapsing or monophasic. Possible complications are visual field defects and motor impairment, with potential blindness and irreversible motor deficits. In severe cases, myogenic respiratory failure ensues. The incidence of reported cases is approximately 0.3–4.4 per 100,000. Paediatric cases of NMOSD are rare but have been reported occasionally, comprising less than 5% of reported cases. Objective: The case serves to show the difficulty of the diagnostic process in a rare autoimmune disease with non-specific symptoms that takes a large interval of time to reveal the complete clinical manifestation of the aforementioned syndrome, as well as the necessity of a multidisciplinary approach in the setting of a general paediatric department in an emergency hospital. Methods: The patient's history, clinical presentation, and information from the diagnostic tools used (contrast-enhanced MRI of the central nervous system) led us to the diagnosis, which was later confirmed by the positive result of the anti-aquaporin 4 (AQP4) antibody serology test. Conclusion: APS is a common symptom of NMOSD and is considered a challenge in the differential-diagnostic plan. Gaining an increased awareness of this disease/syndrome, obtaining a detailed patient history, and performing thorough physical examinations are essential if we are to reduce and avoid misdiagnosis. Keywords: neuromyelitis, Devic's disease, hiccup, autoimmune, MRI
169 EQMamba - Method Suggestion for Earthquake Detection and Phase Picking
Authors: Noga Bregman
Abstract:
Accurate and efficient earthquake detection and phase picking are crucial for seismic hazard assessment and emergency response. This study introduces EQMamba, a deep-learning method that combines the strengths of the Earthquake Transformer and the Mamba model for simultaneous earthquake detection and phase picking. EQMamba leverages the computational efficiency of Mamba layers to process longer seismic sequences while maintaining a manageable model size. The proposed architecture integrates convolutional neural networks (CNNs), bidirectional long short-term memory (BiLSTM) networks, and Mamba blocks. The model employs an encoder composed of convolutional layers and max pooling operations, followed by residual CNN blocks for feature extraction. Mamba blocks are applied to the outputs of BiLSTM blocks, efficiently capturing long-range dependencies in seismic data. Separate decoders are used for earthquake detection, P-wave picking, and S-wave picking. We trained and evaluated EQMamba using a subset of the STEAD dataset, a comprehensive collection of labeled seismic waveforms. The model was trained using a weighted combination of binary cross-entropy loss functions for each task, with the Adam optimizer and a scheduled learning rate. Data augmentation techniques were employed to enhance the model's robustness. Performance comparisons were conducted between EQMamba and the EQTransformer over 20 epochs on this modest-sized STEAD subset. Results demonstrate that EQMamba achieves superior performance, with higher F1 scores and faster convergence compared to EQTransformer. EQMamba reached F1 scores of 0.8 by epoch 5 and maintained higher scores throughout training. The model also exhibited more stable validation performance, indicating good generalization capabilities. While both models showed lower accuracy in phase-picking tasks compared to detection, EQMamba's overall performance suggests significant potential for improving seismic data analysis. The rapid convergence and superior F1 scores of EQMamba, even on a modest-sized dataset, indicate promising scalability for larger datasets. This study contributes to the field of earthquake engineering by presenting a computationally efficient and accurate method for simultaneous earthquake detection and phase picking. Future work will focus on incorporating Mamba layers into the P and S pickers and further optimizing the architecture for seismic data specifics. The EQMamba method holds the potential for enhancing real-time earthquake monitoring systems and improving our understanding of seismic events. Keywords: earthquake, detection, phase picking, s waves, p waves, transformer, deep learning, seismic waves
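Since the abstract describes the training objective only in words, a short sketch may help. Below is a minimal PyTorch illustration of a weighted multi-task binary cross-entropy loss of the kind described; the task weights, tensor shapes, and class name are assumptions, not the authors' published configuration.

```python
import torch
import torch.nn as nn

class MultiTaskPickingLoss(nn.Module):
    """Weighted sum of binary cross-entropy losses for the three decoder heads:
    event detection, P-wave picking, and S-wave picking."""

    def __init__(self, w_det=0.2, w_p=0.4, w_s=0.4):  # hypothetical task weights
        super().__init__()
        self.bce = nn.BCELoss()
        self.weights = (w_det, w_p, w_s)

    def forward(self, preds, targets):
        # preds / targets: 3-tuples of (batch, time) probability traces in [0, 1]
        return sum(w * self.bce(p, t)
                   for w, p, t in zip(self.weights, preds, targets))

# Usage on dummy traces
loss_fn = MultiTaskPickingLoss()
preds = tuple(torch.rand(4, 1000) for _ in range(3))
targets = tuple(torch.randint(0, 2, (4, 1000)).float() for _ in range(3))
print(loss_fn(preds, targets))
```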
168 Genetic Structure Analysis through Pedigree Information in a Closed Herd of the New Zealand White Rabbits
Authors: M. Sakthivel, A. Devaki, D. Balasubramanyam, P. Kumarasamy, A. Raja, R. Anilkumar, H. Gopi
Abstract:
The New Zealand White breed of rabbit is one of the most commonly used, well-adapted exotic breeds in India. Earlier studies were limited to analyzing the environmental factors affecting growth and reproductive performance. In the present study, the population of New Zealand White rabbits in a closed herd was evaluated for its genetic structure. Data on pedigree information (n = 2508) for 18 years (1995-2012) were utilized for the study. Pedigree analysis and the estimates of population genetic parameters based on gene origin probabilities were performed using the software program ENDOG (version 4.8). The analysis revealed that the mean values of generation interval, coefficient of inbreeding and equivalent inbreeding were 1.489 years, 13.233 percent and 17.585 percent, respectively. The proportion of the population inbred was 100 percent. The estimated mean values of average relatedness and the individual increase in inbreeding were 22.727 and 3.004 percent, respectively. The percent increase in inbreeding over generations was 1.94, 3.06 and 3.98 estimated through maximum generations, equivalent generations, and complete generations, respectively. The number of ancestors contributing 50% of the genes (fₐ₅₀) to the gene pool of the reference population was 4, which might have led to the reduction in genetic variability and an increased amount of inbreeding. The extent of genetic bottleneck was assessed by calculating the effective number of founders (fₑ) and the effective number of ancestors (fₐ); the fₑ/fₐ ratio was 1.1, which is indicative of the absence of stringent bottlenecks. Up to the 5th generation, 71.29 percent of the pedigree was complete, reflecting well-maintained pedigree records. The maximum number of known generations was 15, with an average of 7.9, and the average equivalent generations traced were 5.6, indicating a fairly good depth of pedigree. The realized effective population size was 14.93, which is critical, and with the increasing trend of inbreeding, the situation is expected to worsen in the future. The proportion of animals with a genetic conservation index (GCI) greater than 9 was 39.10 percent; animals with higher GCI can be used to maintain a balanced contribution from the founders. From the study, it was evident that the herd was completely inbred, with a very high inbreeding coefficient, and that the effective population size was critical. Recommendations were made to reduce the probability of deleterious effects of inbreeding and to improve the genetic variability in the herd. The present study can help in carrying out similar studies to meet the demand for animal protein in developing countries. Keywords: effective population size, genetic structure, pedigree analysis, rabbit genetics
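Two of the reported quantities follow from standard pedigree formulas. A minimal sketch, assuming the ENDOG-style definitions (individual increase in inbreeding dF = 1 - (1 - F)^(1/(t - 1)), where t is the equivalent generations, and realized effective population size Ne = 1/(2 * mean dF)); the input values are illustrative, not the herd's actual records.

```python
import numpy as np

def individual_delta_f(f, t):
    """Individual increase in inbreeding: dF = 1 - (1 - F)**(1/(t - 1)),
    with F the inbreeding coefficient and t the equivalent generations."""
    return 1.0 - (1.0 - f) ** (1.0 / (t - 1.0))

def realized_ne(mean_delta_f):
    """Realized effective population size: Ne = 1 / (2 * mean dF)."""
    return 1.0 / (2.0 * mean_delta_f)

# Illustrative animals only; not the herd's actual records
f = np.array([0.13, 0.15, 0.11])   # inbreeding coefficients
t = np.array([5.6, 5.8, 5.4])      # equivalent generations traced
d = individual_delta_f(f, t)
print(realized_ne(d.mean()))       # same order of magnitude as the reported 14.93
```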
167 Entrepreneurial Dynamism and Socio-Cultural Context
Authors: Shailaja Thakur
Abstract:
Managerial literature abounds with discussions on business strategies, success stories as well as cases of failure, which provide an indication of the parameters that should be considered in gauging the dynamism of an entrepreneur. Neoclassical economics has reduced entrepreneurship to a mere factor of production, driven solely by the profit motive, thus stripping the entrepreneur of all creativity and restricting his decision making to mechanical calculations. His 'dynamism' is gauged simply by the amount of profits he earns, marginalizing any discussion on the means that he employs to attain this objective. With theoretical backing, we have developed an Index of Entrepreneurial Dynamism (IED) giving weights to the different moves that the entrepreneur makes during his business journey. Strategies such as changes in product lines, markets and technology are gauged as very important (weight of 4), while adaptations in terms of technology and raw materials used, and upgradations in skill set, are given a slightly lesser weight of 3. Use of formal market analysis and diversification in related products are considered moderately important (weight of 2), and being a first-generation entrepreneur, employing managers and having plans to diversify are taken to be only slightly important business strategies (weight of 1). The maximum that an entrepreneur can score on this index is 53. A semi-structured questionnaire is employed to solicit responses from the entrepreneurs on the various strategies that they have employed during the course of their business. Binary as well as graded responses are obtained, weighted and summed up to give the IED. This index was tested on about 150 tribal entrepreneurs in Mizoram, a state of India, and was found to be highly effective in gauging their dynamism. This index has universal acceptability but is devoid of the socio-cultural context, which is very central to the success and performance of the entrepreneurs. We hypothesize that a society that respects risk taking, takes failures in its stride, glorifies entrepreneurial role models, and promotes merit and achievement is one that has a conducive socio-cultural environment for entrepreneurship. For obtaining an idea about the social acceptability, we are putting forth questions related to the social acceptability of business to another set of respondents from different walks of life: bureaucracy, academia, and other professional fields. A similar weighting technique is employed, and an index is generated. This index is used for discounting the IED of the respondent entrepreneurs from that region/society. This methodology is being tested for a sample of entrepreneurs from two very different socio-cultural milieus, a tribal society and a 'mainstream' society, with the hypothesis that the entrepreneurs in the tribal milieu might be showing a higher level of dynamism than their counterparts in other regions. An entrepreneur who scores high on the IED and belongs to a society and culture that holds entrepreneurship in high esteem might not in reality be as dynamic as a person who shows similar dynamism in a relatively discouraging or even an outright hostile environment. Keywords: index of entrepreneurial dynamism, India, social acceptability, tribal entrepreneurs
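A minimal sketch of how such a weighted index can be scored from binary responses under the 4/3/2/1 scheme described above; the item names are hypothetical, since the abstract does not list the full questionnaire, so the maximum of this illustrative item set differs from the published maximum of 53.

```python
# Hypothetical item set following the 4/3/2/1 weighting scheme in the abstract;
# the real instrument has more items, so its maximum score is 53, not the 28 here.
WEIGHTS = {
    "changed_product_lines": 4, "changed_markets": 4, "changed_technology": 4,
    "adapted_technology": 3, "adapted_raw_materials": 3, "upgraded_skills": 3,
    "formal_market_analysis": 2, "diversified_related_products": 2,
    "first_generation": 1, "employs_managers": 1, "plans_to_diversify": 1,
}

def ied_score(responses):
    """Weighted sum of binary responses; graded responses could instead scale
    each weight by a fraction between 0 and 1."""
    return sum(WEIGHTS[item] * int(bool(answer))
               for item, answer in responses.items() if item in WEIGHTS)

print(ied_score({item: 1 for item in WEIGHTS}))  # upper bound of this item set: 28
```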
166 Ibrutinib and the Potential Risk of Cardiac Failure: A Review of Pharmacovigilance Data
Authors: Abdulaziz Alakeel, Roaa Alamri, Abdulrahman Alomair, Mohammed Fouda
Abstract:
Introduction: Ibrutinib is a selective, potent, and irreversible small-molecule inhibitor of Bruton's tyrosine kinase (BTK). It forms a covalent bond with a cysteine residue (CYS-481) at the active site of BTK, leading to inhibition of BTK enzymatic activity. The drug is indicated to treat certain types of cancer such as mantle cell lymphoma (MCL), chronic lymphocytic leukaemia and Waldenström's macroglobulinaemia (WM). Cardiac failure is a condition that refers to the inability of the heart muscle to pump adequate blood to the body's organs. There are multiple types of cardiac failure, including left- and right-sided heart failure and systolic and diastolic heart failure. The aim of this review is to evaluate the risk of cardiac failure associated with the use of ibrutinib and to suggest regulatory recommendations if required. Methodology: The Signal Detection team at the National Pharmacovigilance Center (NPC) of the Saudi Food and Drug Authority (SFDA) performed a comprehensive signal review using its national database as well as the World Health Organization (WHO) database (VigiBase) to retrieve related information for assessing the causality between cardiac failure and ibrutinib. We used the WHO-Uppsala Monitoring Centre (UMC) criteria as the standard for assessing the causality of the reported cases. Results: Case Review: The search returned 212 global ICSRs for the combined drug/adverse drug reaction as of July 2020. The reviewers selected and assessed the causality for the well-documented ICSRs with completeness scores of 0.9 and above (35 ICSRs); the value 1.0 represents the highest score for the best-written ICSRs. Among the reviewed cases, more than half provide a supportive association (four probable and 15 possible cases). Data Mining: The disproportionality between the observed and the expected reporting rate for the drug/adverse drug reaction pair is estimated using the information component (IC), a tool developed by WHO-UMC to measure the reporting ratio. A positive IC reflects a higher statistical association, while negative values indicate a lower statistical association, with the null value equal to zero. The result (IC = 1.5) revealed a positive statistical association for the drug/ADR combination, which means 'Ibrutinib' with 'Cardiac Failure' has been observed more often than expected when compared to other medications available in the WHO database. Conclusion: Health regulators and health care professionals must be aware of the potential risk of cardiac failure associated with ibrutinib, and the monitoring of any signs or symptoms in treated patients is essential. The weighted cumulative evidence identified from the causality assessment of the reported cases and from data mining is sufficient to support a causal association between ibrutinib and cardiac failure. Keywords: cardiac failure, drug safety, ibrutinib, pharmacovigilance, signal detection
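The information component has a simple closed form. A minimal sketch, assuming the basic definition IC = log2(observed/expected) with expected = (drug reports x reaction reports)/total reports; the counts below are invented so that the IC lands near the reported 1.5, and VigiBase in practice applies a shrinkage variant (adding +0.5 terms) that this sketch omits.

```python
import math

def information_component(n_obs, n_drug, n_reaction, n_total):
    """Basic IC = log2(observed / expected) for a drug-event pair, where
    expected = n_drug * n_reaction / n_total under independence."""
    expected = n_drug * n_reaction / n_total
    return math.log2(n_obs / expected)

# Hypothetical counts chosen so the IC lands near the reported value of 1.5
print(information_component(n_obs=212, n_drug=30_000,
                            n_reaction=50_000, n_total=20_000_000))
```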
165 Technology Management for Early Stage Technologies
Authors: Ming Zhou, Taeho Park
Abstract:
Early stage technologies have been particularly challenging to manage due to their numerous, high-degree uncertainties. Most results coming directly out of a research lab tend to be at an early, if not infant, stage. A long and uncertain commercialization process awaits these lab results. The majority of such lab technologies go nowhere and never get commercialized for various reasons, and any efforts or financial resources put into managing them turn fruitless. High stakes naturally call for better results, which makes a patenting decision harder to make. A good and well-protected patent goes a long way toward commercialization of the technology. Our preliminary research showed that there was no simple yet productive procedure for such valuation. Most studies to date have been theoretical and overly comprehensive, with practical suggestions non-existent. Hence, we attempted to develop a simple and highly implementable procedure for efficient and scalable valuation. We thoroughly reviewed existing research, interviewed practitioners in the Silicon Valley area, and surveyed university technology offices. Instead of presenting another theoretical and exhaustive study, we aimed at developing practical guidance that a government agency and/or university office could easily deploy to get things moving to the later steps of managing early stage technologies. We provide a procedure to thriftily value a technology and make the patenting decision. A patenting index was developed using survey data and expert opinions. We identified the most important factors to be used in the patenting decision using survey ratings. The ratings then assisted us in generating relative weights for the later scoring and weighted-averaging step. More importantly, we validated our procedure by testing it with our practitioner contacts. Their inputs produced a general yet highly practical cut schedule; such a schedule of realistic practices has yet to be seen in the current research. Although a technology office may choose to deviate from our cuts, what we offer here at least provides a simple and meaningful starting point. This procedure was welcomed by practitioners in our expert panel and university officers in our interview group. This research contributes to our current understanding and practices of managing early stage technologies by instating a heuristically simple yet theoretically solid method for the patenting decision. Our findings generated top decision factors, decision processes and decision thresholds of key parameters. This research offers a more practical perspective which further completes our extant knowledge. Our results could be impacted by our sample size and may be biased somewhat by our focus on the Silicon Valley area. Future research, blessed with bigger data and more insights, may want to further train and validate our parameter values in order to obtain more consistent results and analyze our decision factors for different industries. Keywords: technology management, early stage technology, patent, decision
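The scoring and weighted-averaging step can be illustrated briefly. In the sketch below, the factor names, the survey ratings used to derive weights, and the cut threshold are all assumptions for illustration, since the abstract does not publish the actual index.

```python
import numpy as np

# Assumed decision factors and mean survey importance ratings (1-5 scale)
factors = ["novelty", "market_size", "freedom_to_operate", "maturity"]
ratings = np.array([4.6, 4.1, 3.8, 2.9])
weights = ratings / ratings.sum()        # relative weights derived from ratings

def patenting_index(scores):
    """Weighted average of expert factor scores (each on a 0-10 scale)."""
    return float(np.dot(weights, scores))

idx = patenting_index(np.array([8, 6, 7, 4]))
print(round(idx, 2), "-> file" if idx >= 6.0 else "-> hold")  # 6.0 is an illustrative cut
```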
164 Multi-Scale Spatial Difference Analysis Based on Nighttime Lighting Data
Authors: Qinke Sun, Liang Zhou
Abstract:
The 'Dragon-Elephant Debate' between China and India is an important manifestation of global multipolarity in the 21st century. The two rising powers carried out economic reforms one after another, separated by an interval of more than ten years, becoming the fastest growing developing countries and emerging economies in the world. At the same time, the development differences between China and India have gradually attracted wide attention from scholars. Based on continuous annual nighttime light data (DMSP-OLS) from 1992 to 2012, this paper systematically compares and analyses the regional development differences between China and India using the Gini coefficient, the coefficient of variation, the comprehensive night light index (CNLI) and hot spot analysis. The results show that: (1) China's overall expansion from 1992 to 2012 was 1.84 times that of India, in which China's change was 2.6 times and India's change was 2 times. The proportion of unlighted area in China dropped from 92% to 82%, while that in India dropped from 71% to 50%. (2) China's new growth-oriented cities appear in the west, at Hohhot and Ordos in Inner Mongolia and at Urumqi, while the declining cities are concentrated in Liaoning Province and Jilin Province in the northeast; India's new growth-oriented cities are concentrated in Chhattisgarh in the north, while the declining areas are distributed in Uttar Pradesh. (3) China's differences at different scales are lower than India's, and regional inequality of development is gradually narrowing: Gini coefficients at the regional and provincial levels have decreased from 0.29 and 0.44 to 0.24 and 0.38, respectively. Regional inequality in India has improved only slowly, and regional differences are gradually widening, with the regional Gini coefficient rising from 0.28 to 0.32 and the provincial Gini coefficient decreasing slightly from 0.64 to 0.63. (4) The spatial pattern of China's regional development is mainly an east-west difference, which shows the difference between coastal and inland areas, while the spatial pattern of India's regional development is mainly a north-south difference; because the southern states face the sea, it also reflects a coastal-inland difference to a certain extent. (5) Beijing and Shanghai present a multi-core outward expansion model, with an average annual CNLI higher than 0.01, while New Delhi and Mumbai present a single main-core enhancement expansion model, with an average annual CNLI lower than 0.01; the average annual CNLI of Shanghai is about five times that of Mumbai. Keywords: spatial pattern, spatial difference, DMSP-OLS, China, India
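The Gini coefficient used above can be computed directly from region-level light totals. A minimal sketch, assuming the standard discrete formula; the example values are invented, not DMSP-OLS data.

```python
import numpy as np

def gini(x):
    """Gini coefficient of non-negative values (0 = equality, 1 = maximum inequality)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    cum = np.cumsum(x)
    return (n + 1 - 2 * np.sum(cum / cum[-1])) / n

# Invented total night-light values for a handful of regions
print(gini([5.0, 10.0, 20.0, 200.0]))
```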
163 A Comparison of Methods for Estimating Dichotomous Treatment Effects: A Simulation Study
Authors: Jacqueline Y. Thompson, Sam Watson, Lee Middleton, Karla Hemming
Abstract:
Introduction: The odds ratio (estimated via logistic regression) is a well-established and common approach for estimating covariate-adjusted binary treatment effects when comparing a treatment and control group with dichotomous outcomes. Its popularity is primarily because of its stability and robustness to model misspecification. However, the situation is different for the relative risk and risk difference, which are arguably easier to interpret and better suited to specific designs such as non-inferiority studies. So far, there is no equivalent, widely accepted approach to estimate an adjusted relative risk and risk difference when conducting clinical trials. This is partly due to the lack of a comprehensive evaluation of available candidate methods. Methods/Approach: A simulation study is designed to evaluate the performance of relevant candidate methods for estimating relative risks, representing conditional and marginal estimation approaches. We consider the log-binomial generalised linear model (GLM) with iteratively weighted least-squares (IWLS) and model-based standard errors (SEs); log-binomial GLM with convex optimisation and model-based SEs; log-binomial GLM with convex optimisation and permutation tests; modified-Poisson GLM with IWLS and robust SEs; log-binomial generalised estimating equations (GEE) with robust SEs; marginal standardisation with delta-method SEs; and marginal standardisation with permutation-test SEs. Independent and identically distributed datasets are simulated from a randomised controlled trial to evaluate these candidate methods. Simulations are replicated 10000 times for each scenario across all possible combinations of sample sizes (200, 1000, and 5000), outcomes (10%, 50%, and 80%), and covariates (ranging from -0.05 to 0.7) representing weak, moderate or strong relationships. Treatment effects (0, -0.5, and 1 on the log scale) consider null (H0) and alternative (H1) hypotheses to evaluate coverage and power in realistic scenarios. Performance measures (bias, mean square error (MSE), relative efficiency, and convergence rates) are evaluated across scenarios covering a range of sample sizes, event rates, covariate prognostic strength, and model misspecifications. Potential Results, Relevance & Impact: There are several methods for estimating unadjusted and adjusted relative risks. However, it is unclear which method(s) is the most efficient, preserves the type-I error rate, is robust to model misspecification, or is the most powerful when adjusting for non-prognostic and prognostic covariates. GEE estimates may be biased when the outcome distributions are not from marginal binary data. Also, it seems that marginal standardisation and convex optimisation may perform better than the GLM IWLS log-binomial approach. Keywords: binary outcomes, statistical methods, clinical trials, simulation study
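One candidate method above, the modified-Poisson estimator, is easy to reproduce: fit a Poisson working model to the binary outcome and use robust sandwich standard errors. A minimal sketch on simulated trial data using statsmodels; the data-generating parameters are arbitrary and do not match the study's scenarios.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1000
treat = rng.integers(0, 2, n)
x = rng.normal(size=n)
p = 1.0 / (1.0 + np.exp(-(-1.5 + 0.4 * treat + 0.3 * x)))  # assumed model
y = rng.binomial(1, p)

X = sm.add_constant(np.column_stack([treat, x]))
# Poisson working model for a binary outcome, with robust (sandwich) SEs
fit = sm.GLM(y, X, family=sm.families.Poisson()).fit(cov_type="HC1")
print(np.exp(fit.params[1]))    # covariate-adjusted relative risk for treatment
```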
162 Working Conditions and Occupational Health: Analyzing the Stressing Factors in Outsourced Employees
Authors: Cledinaldo A. Dias, Isabela C. Santos, Marcus V. S. Siqueira
Abstract:
In contemporary globalization, the competitiveness generated in the search for new markets, aiming at the growth of productivity and, consequently, of profits, implies the redefinition of productive processes and new forms of work organization. As a result of this restructuring, unemployment, labor force turnover and increases in outsourcing and informal work occur. Considering the different relationships and working conditions of outsourced employees, this study aims to identify the stressors most present among outsourced service providers at a Federal Institution of Higher Education in Brazil. To reach this objective, a descriptive exploratory study combining quantitative and qualitative approaches was carried out; the qualitative approach was chosen to provide an in-depth analysis of the occupational conditions of outsourced workers, since this method focuses on the social world of investigated meanings and on the language or speech of each subject as its object. The survey was conducted in the city of Montes Claros, Minas Gerais (Brazil), and involved eighty workers from companies hired by the institution, including armed security guards, porters, cleaners, drivers, gardeners, and administrative assistants. The choice of professionals obeyed non-probabilistic criteria of convenience or accessibility. Data collection was performed by means of a structured questionnaire composed of sixty questions, in a Likert-type frequency interval scale format, in order to identify potential organizational stressors. The results obtained show that the stress factors pointed out by the workers are, in most cases, determining factors behind low productive performance at work. Among the factors associated with stress, the ones that stood out most were those related to organizational communication failures, incentives to competition, lack of expectations of professional growth, insecurity and job instability. Based on the results, there is a need for greater organizational concern and responsibility for the well-being and mental health of the outsourced worker, recognition of their physical and psychological limitations, and care that goes beyond functional capacity for work. Specifically for the preservation of mental and physical health and quality of life, it is concluded that professionals need a work environment that supports them both externally and internally, so that they remain in balance and obtain satisfaction in their work. Keywords: occupational health, outsourced, organizational studies, stressors
161 Magnitude of Transactional Sex and Its Determinant Factors among Women in Sub-Saharan Africa: Systematic Review and Meta-Analysis
Authors: Gedefaye Nibret Mihretie
Abstract:
Background: Transactional sex is casual sex between two people in which material incentives are exchanged for sexual favors. Transactional sex is associated with negative consequences, which increase the risk of sexually transmitted diseases, including HIV/AIDS, unintended pregnancy, unsafe abortion, and psychological trauma. Many primary studies in Sub-Saharan Africa have assessed the prevalence and associated factors of transactional sex among women, but these studies had great discrepancies and inconsistent results. Hence, this systematic review and meta-analysis aimed to synthesize the pooled prevalence of the practice of transactional sex among women and its associated factors in Sub-Saharan Africa. Method: Cross-sectional studies were systematically searched from March 6, 2022, to April 24, 2022, using PubMed, Google Scholar, HINARI, the Cochrane Library, and grey literature. The pooled prevalence of transactional sex and associated factors was estimated using the DerSimonian-Laird random-effects model. Stata (version 16.0) was used to analyze the data. The I-squared statistic was used to assess the studies' heterogeneity. A funnel plot and Egger's test were used to check for publication bias. A subgroup analysis was performed to minimize the underlying heterogeneity depending on the study years, source of data, sample sizes and geographical location. Results: Four thousand one hundred thirty articles were extracted from various databases. The final thirty-two studies were included in this systematic review, comprising 108,075 participants. The pooled prevalence of transactional sex among women in Sub-Saharan Africa was 12.55%, with a confidence interval of 9.59% to 15.52%. Educational status (OR = 0.48, 95% CI: 0.27, 0.69) was a protective factor against transactional sex, whereas alcohol use (OR = 1.85, 95% CI: 1.19, 2.52), early sexual debut (OR = 2.57, 95% CI: 1.17, 3.98), substance abuse (OR = 4.21, 95% CI: 2.05, 6.37), a history of sexual experience (OR = 4.08, 95% CI: 1.38, 6.78), physical violence (OR = 6.59, 95% CI: 1.17, 12.02), and sexual violence (OR = 3.56, 95% CI: 1.15, 8.27) were risk factors for transactional sex. Conclusion: The prevalence of transactional sex among women in Sub-Saharan Africa was high. Educational status, alcohol use, substance abuse, early sexual debut, a history of sexual experience, physical violence, and sexual violence were predictors of transactional sex. Governments and other stakeholders should design interventions to reduce alcohol utilization, provide health information about the negative consequences of early sexual debut and substance abuse, and reduce sexual violence, ensuring gender equality through mass media; these measures should be included in state policy. Keywords: women's health, child health, reproductive health, midwifery
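The DerSimonian-Laird random-effects pooling used above has a short closed form. A minimal sketch, assuming per-study prevalence estimates with known within-study variances; the five study values are invented for illustration.

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Random-effects pooled estimate with the DerSimonian-Laird tau^2 estimator.
    Returns (pooled estimate, lower 95% CI, upper 95% CI)."""
    effects = np.asarray(effects, float)
    variances = np.asarray(variances, float)
    w = 1.0 / variances                          # fixed-effect weights
    fixed = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - fixed) ** 2)       # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)  # between-study variance
    w_re = 1.0 / (variances + tau2)              # random-effects weights
    pooled = np.sum(w_re * effects) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return pooled, pooled - 1.96 * se, pooled + 1.96 * se

# Invented study prevalences (proportions) and within-study variances
print(dersimonian_laird([0.08, 0.11, 0.15, 0.10, 0.18],
                        [0.0004, 0.0002, 0.0009, 0.0003, 0.0008]))
```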
160 The Properties of Risk-Based Approaches to Asset Allocation Using Combined Metrics of Portfolio Volatility and Kurtosis: Theoretical and Empirical Analysis
Authors: Maria Debora Braga, Luigi Riso, Maria Grazia Zoia
Abstract:
Risk-based approaches to asset allocation are portfolio construction methods that do not rely on the input of expected returns for the asset classes in the investment universe and only use risk information. They include the Minimum Variance strategy (MV strategy), the traditional (volatility-based) Risk Parity strategy (SRP strategy), the Most Diversified Portfolio strategy (MDP strategy) and, for many, the Equally Weighted strategy (EW strategy). All the mentioned approaches are based on portfolio volatility as the reference risk measure, but in 2023 the Kurtosis-based Risk Parity strategy (KRP strategy) and the Minimum Kurtosis strategy (MK strategy) were introduced. Understandably, they used the fourth root of the portfolio fourth moment as a proxy for portfolio kurtosis, in order to work with a homogeneous function of degree one. This paper contributes mainly theoretically and methodologically to the framework of risk-based asset allocation approaches, with two steps forward. First, a new and more flexible objective function is used, considering a linear combination (with positive coefficients that sum to one) of portfolio volatility and portfolio kurtosis, to serve alternatively a risk-minimization goal or a homogeneous risk-distribution goal. Hence, the new basic idea consists in extending the achievement of the typical goals of risk-based approaches to a combined risk measure. To give the rationale behind operating with such a risk measure, it is worth remembering that volatility and kurtosis are both expressions of uncertainty, to be read as the dispersion of returns around the mean. Both preserve adherence to a symmetric framework and consideration of the entire returns distribution, but they differ in that the former captures the 'normal'/'ordinary' dispersion of returns, while the latter is able to catch the huge dispersion. Therefore, a combined risk metric built from two individual metrics that focus on the same phenomenon but are differently sensitive to its intensity allows the asset manager, by varying the 'relevance coefficient' associated with the individual metrics in the objective function, to express a wide set of plausible investment goals for the portfolio construction process, while serving investors differently concerned with tail risk and traditional risk. Since this is the first study to implement risk-based approaches using a combined risk measure, it becomes of fundamental importance to investigate the portfolio effects triggered by this innovation. The paper also offers a second contribution. Until the recent advent of the MK strategy and the KRP strategy, efforts to highlight interesting properties of risk-based approaches were inevitably directed towards the traditional MV strategy and SRP strategy. Previous literature established an increasing order in terms of portfolio volatility, starting from the MV strategy, passing through the SRP strategy and arriving at the EW strategy, and provided the mathematical proof of the 'equalization effect' concerning marginal risks in the MV strategy and risk contributions in the SRP strategy. Regarding the validity of similar conclusions for the MK strategy and KRP strategy, a theoretical demonstration is still pending. This paper fills this gap. Keywords: risk parity, portfolio kurtosis, risk diversification, asset allocation
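The combined objective can be prototyped in a few lines: minimize lam * sigma(w) + (1 - lam) * kappa(w) over long-only weights summing to one. A minimal sketch with scipy; the covariance and fourth-moment inputs are invented, and the portfolio fourth moment is approximated by its diagonal terms rather than the full cokurtosis tensor used in the actual methodology.

```python
import numpy as np
from scipy.optimize import minimize

def combined_risk(w, cov, m4_diag, lam):
    """lam * volatility + (1 - lam) * fourth root of a (simplified) portfolio
    fourth moment. A diagonal approximation keeps the sketch short; the paper's
    measure uses the full cokurtosis structure."""
    vol = np.sqrt(w @ cov @ w)
    kurt_proxy = (np.sum((w ** 4) * m4_diag)) ** 0.25
    return lam * vol + (1 - lam) * kurt_proxy

n = 4
cov = np.diag([0.02, 0.03, 0.05, 0.08])       # assumed asset covariances
m4 = np.array([3e-4, 5e-4, 9e-4, 2e-3])       # assumed diagonal fourth moments
cons = ({"type": "eq", "fun": lambda w: w.sum() - 1},)
res = minimize(combined_risk, np.full(n, 1 / n), args=(cov, m4, 0.5),
               bounds=[(0, 1)] * n, constraints=cons)
print(res.x)   # minimum combined-risk weights for relevance coefficient 0.5
```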
159 A Failure to Strike a Balance: The Use of Parental Mediation Strategies by Foster Carers and Social Workers
Authors: Jennifer E Simpson
Abstract:
Background and purpose: The ubiquitous use of the Internet and social media by children and young people has had a dual effect. The first is to open a world of possibilities and promise that is characterized by the ability to consume and create content, connect with friends, explore and experiment. The second relates to risks such as unsolicited requests, sexual exploitation, cyberbullying and commercial exploitation. This duality poses significant difficulties for a generation of foster carers and social workers who have no childhood experience to draw on in terms of growing up using the Internet, social media and digital devices. This presentation is concerned with the findings of a small qualitative study about the use of digital devices and the Internet by care-experienced young people to stay in touch with their families and the way this was managed by foster carers and social workers using specific parental mediation strategies. The findings highlight that restrictive strategies were used by foster carers and endorsed by social workers. An argument is made for an approach that develops a series of balanced solutions that move foster carers from such restrictive approaches to those that are grounded in co-use and are interpretive in nature. Methods: Using a purposive sampling strategy, 12 triads consisting of care-experienced young people (aged 13-18 years), their foster carers and allocated social workers were recruited. All respondents undertook a semi-structured interview, with the young people detailing what social media apps and other devices they used to contact their families via an Ecomap. The foster carers and social workers shared details of the methods and approaches they used to manage digital devices and the Internet in general. Data analysis was performed using a Framework analytic method to explore the various attitudes, as well as complementary and contradictory perspectives of the young people, their foster carers and allocated social workers. Findings: The majority of foster carers made use of parental mediation strategies that erred on the side of typologies that included setting rules and regulations (restrictive), ad-hoc checking of a young person's behavior and device (monitoring), and software used to limit or block access to inappropriate websites (technical). It was noted that minimal use was made by foster carers of parental mediation strategies that included talking about content (active/interpretive) or sharing Internet activities (co-use). Amongst the majority of the social workers, they also had a strong preference for restrictive approaches. Conclusions and implications: Trepidations on the part of both foster carers and social workers about the use of digital devices and the Internet meant that the parental strategies used were weighted more towards restriction, with little use made of approaches such as co-use and interpretative. This lack of balance calls for solutions that are grounded in co-use and an interpretive approach, both of which can be achieved through training and support, as well as wider policy change. Keywords: parental mediation strategies, risk, children in state care, online safety
158 Meso-Scopic Structural Analysis of Chaura Thrust, Himachal Pradesh, India
Authors: Rajkumar Ghosh
Abstract:
The Jhakri Thrust (JT), coeval with the Sarahan Thrust (ST), was later considered to be part of the Chaura Thrust (CT). The Main Central Thrust (MCT) delimits the southern extreme of the Higher Himalaya, whereas the northern boundary is defined by the South Tibetan Detachment System (STDS), a parallel set of north-dipping extensional faults. Regarding the activation timing of the MCT and STDS, the MCT activated in two parts (MCT-L during 15-0.7 Ma, and MCT-U during 25-14 Ma); similarly, the STDS triggered in two parts (STDS-L during 24-12 Ma, and STDS-U during 19-14 Ma). As for the activation ages of the MBT and MFT, the MBT occurred during 11-9 Ma, and the MFT followed at <2.5 Ma. There are two mylonitised zones (zones of S-C fabric) found under the microscope. Dynamic and bulging recrystallization and sub-grain formation were documented under the optical microscope in samples collected from these zones. The varieties of crenulated schistosity are shown in photomicrographs. In a rare case, crenulation cleavage and sigmoid muscovite were found together side by side, with recrystallized quartzo-feldspathic grains existing between the crenulation cleavages. These thin-section studies allow three possible hypotheses for such variations in crenulation cleavages. S/SE-verging meso- and micro-scale box folds around Chaura might be a manifestation of some structural upliftment. Near Chaura, kink folds are visible. Prominent asymmetric shear-sense indicators in augen mylonite are missing at the meso-scale but dominantly present under the microscope. The main foliation becomes steepest (dip range ~ 65-80º) at this place. The aim of this section is to characterize the box fold and its signature in the regional geology of the Himachal Himalaya. A Grain Boundary Migration (GBM)-associated temperature range (400-750 ºC) was documented from grain-scale microstructural studies along the Jhakri-Wangtu transect. Oriented samples were collected along the Jhakri-Chaura transect at a regular interval of ~ 1 km for strain analysis. The Higher Himalayan Out-of-Sequence Thrust (OOST) in Himachal Pradesh was documented a decade ago. In other parts of the Himalayas, the OOST is represented as a line between the MCTL and MCTU, but in the Himachal Pradesh area, the OOST activated the MCTL as well as a zone located south of the MCTU. Strain variation near the OOST is therefore to be expected, but multiple sets of OOSTs may produce a zigzag pattern of strain accumulation in this area and bring out the overprinting structures of the multiple sets of OOSTs. Keywords: Chaura Thrust, out-of-sequence thrust, Main Central Thrust, Sarahan Thrust
157 The Risk of Deaths from Viral Hepatitis among the Female Workers in the Beauty Service Industry
Authors: Byeongju Choi, Sanggil Lee, Kyung-Eun Lee
Abstract:
Introduction: In the Republic of Korea, the number of workers in the beauty industry has been increasing. Because the prevalence of hepatitis B carriers in Korea is higher than in other countries, a risk of blood-borne infection, including viral hepatitis B and C, can be expected among beauty salon workers from the use of sharp and contaminated instruments during procedures. However, health care policies to prevent blood-borne infection among these workers have not been established due to the lack of evidence. Moreover, workers in hair and nail salons are mostly employed at small businesses, where national mandatory systems or policies for workers' health management do not apply. In this study, the risk of dying from viral hepatitis B and C associated with job experience in hair and nail salons was assessed. Method: We conducted a retrospective review of the job histories and causes of death among female deaths from 2006-2016; 132,744 female deaths who had one or more job experiences during their lifetime were included in this study. Job histories were assessed using the employment insurance database of the Korea Employment Information Service (KEIS), and the causes of death were taken from the death statistics produced by Statistics Korea. The case group (n = 666) was classified as deaths with a record of 'B15-B19' as a cause of death based on the Korean Standard Classification of Diseases (KCD); deaths from other causes formed the control group (n = 132,078). Workers in the beauty service industry were defined as employees who had ever worked in the industry coded as '9611' based on the Korea Standard Industry Classification (KSIC). In addition to job histories, birth year, marital status and education level were obtained from the death statistics. Multiple logistic regression analysis was used to assess the risk of death from viral hepatitis in the case and control groups. Result: The number of deaths with any job experience at a hair or nail salon was 255. After adjusting for the confounders of age, marital status and education, the odds ratio (OR) for death from viral hepatitis was quite high in the group with work experience in the beauty service industry, at 3.14 (95% confidence interval (CI) 1.00-9.87). Other factors associated with an increased risk of death from viral hepatitis were low education level (OR = 1.34, 95% CI 1.04-1.73) and being married (OR = 1.42, 95% CI 1.02-1.97). Conclusion: The risk of death from viral hepatitis was high among workers in the beauty service industry but not statistically significant, which might be attributed to the small number of workers in the beauty service industry. It is likely that the number of workers in the beauty service industry was underestimated due to their temporary job positions. Further studies evaluating the status and incidence of viral infection among these workers, with consideration of vertical transmission, would be required. Keywords: beauty service, viral hepatitis, blood-borne infection, viral infection
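The adjusted odds ratio above comes from a multiple logistic regression of case status on exposure and confounders. A minimal sketch with statsmodels on simulated data; the coefficients and covariates are assumptions for illustration, not the Korean registry data.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 5000
beauty = rng.integers(0, 2, n)                 # ever worked in beauty services
age = rng.normal(55.0, 10.0, n)
logit = -6.0 + 1.1 * beauty + 0.02 * age       # assumed data-generating model
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

X = sm.add_constant(np.column_stack([beauty, age]))
fit = sm.Logit(y, X).fit(disp=0)
print(np.exp(fit.params[1]))                   # adjusted OR for the exposure
print(np.exp(fit.conf_int()[1]))               # its 95% confidence interval
```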
156 Residual Analysis and Ground Motion Prediction Equation Ranking Metrics for Western Balkan Strong Motion Database
Authors: Manuela Villani, Anila Xhahysa, Christopher Brooks, Marco Pagani
Abstract:
The geological structure of the Western Balkans is strongly affected by the collision between the Adria microplate and the southwestern Eurasia margin, resulting in a considerably active seismic region. The Harmonization of Seismic Hazard Maps in the Western Balkan Countries Project (BSHAP) (2007-2011, 2012-2015), supported by NATO, enabled the preparation of new seismic hazard maps of the Western Balkans; however, when inspecting the seismic hazard models later produced by these countries at the national scale, significant differences in design PGA values are observed at the borders, for instance North Albania-Montenegro, South Albania-Greece, etc. Considering that the catalogues were unified and the seismic sources were defined within the BSHAP framework, the differences evidently arise from the selection of Ground Motion Prediction Equations, which are generally the component with the highest impact on seismic hazard assessment. At the time of the project, only a modest database was available, namely 672 three-component records, whereas nowadays this strong motion database has increased considerably, up to 20,939 records, with Mw ranging in the interval 3.7-7 and an epicentral distance distribution from 0.47 km to 490 km. Statistical analysis of the strong motion database showed a lack of recordings in the moderate-to-large magnitude and short distance ranges; therefore, there is a need to re-evaluate the Ground Motion Prediction Equations in light of the recently updated database and the new generations of GMMs. In some cases, it was observed that some events were more extensively documented in one database than the other, like the 1979 Montenegro earthquake, with a considerably larger number of records in the BSHAP analogue SM database when compared to ESM23. Therefore, the strong motion flat-file provided by the Harmonization of Seismic Hazard Maps in the Western Balkan Countries Project was merged with the ESM23 database for the polygon studied in this project. After performing the preliminary residual analysis, the candidate GMPEs were identified. This process was done using the GMPE performance metrics available within the SMT in the OpenQuake platform: the Likelihood Model and Euclidean Distance Based Ranking (EDR) were used. Finally, a GMPE logic tree was selected for this study and, following the selection of candidate GMPEs, model weights were assigned using the average sample log-likelihood approach of Scherbaum. Keywords: residual analysis, GMPE, western balkan, strong motion, openquake
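The Scherbaum average-sample-log-likelihood weighting mentioned in the final step can be sketched compactly: each GMPE's LLH score is the mean negative log2 density of the observed residuals under its prediction, and logic-tree weights are taken proportional to 2^(-LLH). A minimal sketch, assuming normally distributed total residuals; the numbers are invented.

```python
import numpy as np
from scipy.stats import norm

def average_llh(obs, mu, sigma):
    """Average sample log-likelihood (Scherbaum et al., 2009): the mean of
    -log2 of the predicted normal density evaluated at the observations."""
    return float(np.mean(-np.log2(norm.pdf(obs, loc=mu, scale=sigma))))

def llh_weights(llh_values):
    """Data-driven logic-tree weights: w_i proportional to 2**(-LLH_i)."""
    w = 2.0 ** (-np.asarray(llh_values, dtype=float))
    return w / w.sum()

rng = np.random.default_rng(0)
residuals = rng.normal(0.1, 1.1, 500)         # hypothetical total residuals
print(average_llh(residuals, mu=0.0, sigma=1.0))
print(llh_weights([1.3, 1.6, 2.1]))           # invented LLH scores for three GMPEs
```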
155 Evaluation of the Effect of Learning Disabilities and Accommodations on the Prediction of the Exam Performance: Ordinal Decision-Tree Algorithm
Abstract:
Providing students with learning disabilities (LD) with extra time to grant them equal access to the exam is a necessary but insufficient condition to compensate for their LD; there should also be a clear indication that the additional time was actually used. For example, if students with LD use more time than students without LD and yet receive lower grades, this may indicate that a different accommodation is required. If they achieve higher grades but use the same amount of time, then the effectiveness of the accommodation has not been demonstrated. The main goal of this study is to evaluate the effect of including parameters related to LD and extended exam time, along with other commonly-used characteristics (e.g., student background and ability measures such as high-school grades), on the ability of ordinal decision-tree algorithms to predict exam performance. We use naturally-occurring data collected from hundreds of undergraduate engineering students. The sub-goals are i) to examine the improvement in prediction accuracy when the indicator of exam performance includes 'actual time used' in addition to the conventional indicator (exam grade) employed in most research; ii) to explore the effectiveness of extended exam time on exam performance for different courses and for LD students with different profiles (i.e., sets of characteristics). This is achieved by using the patterns (i.e., subgroups) generated by the algorithms to identify pairs of subgroups that differ in just one characteristic (e.g., course or type of LD) but have different outcomes in terms of exam performance (grade and time used). Since grade and time used exhibit an ordinal form, we propose a method based on ordinal decision-trees, which applies a weighted information-gain ratio (WIGR) measure for selecting the classifying attributes. Unlike other known ordinal algorithms, our method does not assume monotonicity in the data. The proposed WIGR is an extension of an information-theoretic measure, in the sense that it adjusts to the case of an ordinal target and takes into account the error severity between two different target classes. Specifically, we use ordinal C4.5, random-forest, and AdaBoost algorithms, as well as an ensemble technique composed of ordinal and non-ordinal classifiers. Firstly, we find that the inclusion of LD and extended exam-time parameters improves the prediction of exam performance (compared to specifications of the algorithms that do not include these variables). Secondly, when the indicator of exam performance includes 'actual time used' together with grade (as opposed to grade only), the prediction accuracy improves. Thirdly, our subgroup analyses show clear differences in the effect of extended exam time on exam performance among different courses and different student profiles. From a methodological perspective, we find that the ordinal decision-tree based algorithms outperform their conventional, non-ordinal counterparts. Further, we demonstrate that the ensemble-based approach leverages the strengths of each type of classifier (ordinal and non-ordinal) and yields better performance than each classifier individually. Keywords: actual exam time usage, ensemble learning, learning disabilities, ordinal classification, time extension
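The abstract does not fully specify the WIGR measure, so the sketch below shows one plausible reading: an information-gain ratio whose entropy terms are scaled by the ordinal distance of each class from the node's median class, so that errors between distant classes weigh more. The exact published definition may differ.

```python
import numpy as np

def ordinal_weighted_entropy(labels, classes):
    """Entropy whose class terms are scaled by 1 + normalized ordinal distance
    from the node's median class, so distant (more severe) errors weigh more."""
    labels = np.asarray(labels)
    p = np.array([(labels == c).mean() for c in classes])
    severity = 1.0 + np.abs(np.array(classes) - np.median(labels)) / (
        max(classes) - min(classes))
    nz = p > 0
    return float(-(severity[nz] * p[nz] * np.log2(p[nz])).sum())

def wigr(parent, splits, classes):
    """Severity-weighted information gain divided by the split information."""
    n = len(parent)
    gain = ordinal_weighted_entropy(parent, classes) - sum(
        (len(s) / n) * ordinal_weighted_entropy(s, classes) for s in splits)
    split_info = -sum((len(s) / n) * np.log2(len(s) / n) for s in splits if len(s))
    return gain / split_info if split_info else 0.0

parent = np.array([0, 0, 1, 1, 2, 2, 2, 1])   # toy ordinal grade classes
print(wigr(parent, [parent[:4], parent[4:]], classes=[0, 1, 2]))
```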
154 Effects of Acacia Honey Drink Ingestion during Rehydration after Exercise Compared to Sports Drink on Physiological Parameters and Subsequent Running Performance in the Heat
Authors: Foong Kiew Ooi, Aidi Naim Mohamad Samsani, Chee Keong Chen, Mohamed Saat Ismail
Abstract:
Introduction: Prolonged exercise in a hot and humid environment can result in glycogen depletion and is associated with loss of body fluid. Carbohydrate contained in sports beverages is beneficial for improving sports performance and preventing dehydration. Carbohydrate contained in honey is believed to serve as an alternative form of carbohydrate for enhancing sports performance. Objective: To investigate the effectiveness of a honey drink compared to a sports drink as a recovery aid for running performance and physiological parameters in the heat. Method: Ten male recreational athletes (age: 22.2 ± 2.0 years, VO2max: 51.5 ± 3.7 ml.kg-1.min-1) participated in this randomized cross-over study. On each trial, participants were required to run for 1 hour in the glycogen depletion phase (Run-1), followed by a rehydration phase of 2 hours and subsequently a 20-minute time-trial performance (Run-2). During Run-1, subjects were required to run on the treadmill in the heat (31°C) with 70% relative humidity at 70% of their VO2max. During the rehydration phase, participants drank either the honey drink, the sports drink, or plain water in an amount equivalent to 150% of body weight loss, given in dispersed intervals (60%, 50% and 40%) at 0 min, 30 min and 60 min, respectively. Subsequently, the time trial was performed by the participants over 20 minutes, and the longest distance covered was recorded. Physiological parameters were analysed using two-way ANOVA with repeated measures, and time-trial performance was analysed using one-way ANOVA. Results: Acacia honey elicited a better time-trial performance, with a significantly longer distance covered compared to the water trial (P<0.05). However, there was no significant difference between the Acacia honey and sports drink trials (P > 0.05). The Acacia honey and sports drink trials elicited distances 249 m (8.24%) and 211 m (6.79%) longer than the water trial, respectively. For physiological parameters, plasma glucose, plasma insulin and plasma free fatty acids in the Acacia honey and sports drink trials were significantly higher compared to the water trial during the rehydration phase and the time-trial running performance phase. There were no significant differences in body weight changes, oxygen uptake, hematocrit, plasma volume changes and plasma cortisol across the trials. Conclusion: Acacia honey elicited the greatest beneficial effects on sports performance among the drinks; thus it has potential to be used for rehydration in athletes who train and compete in a hot environment. Keywords: honey drink, rehydration, sports performance, plasma glucose, plasma insulin, plasma cortisol
153 Greenhouse Gasses' Effect on Atmospheric Temperature Increase and the Observable Effects on Ecosystems
Authors: Alexander J. Severinsky
Abstract:
Radiative forces of greenhouse gases (GHG) increase the temperature of the Earth's surface, more on land and less in the oceans, due to their thermal capacities. Given this inertia, the temperature increase is delayed over time. Air temperature, however, is not delayed, as the thermal capacity of air is much lower. In this study, through analysis and synthesis of multidisciplinary science and data, an estimate of the atmospheric temperature increase is made. This estimate is then used to shed light on current observations of ice and snow loss, desertification and forest fires, and increased extreme air disturbances. The reason for this inquiry is the author's skepticism that current changes can be explained by a "~1 °C" global average surface temperature rise within the last 50-60 years. The only other plausible cause to explore for understanding is that of atmospheric temperature rise. The study utilizes an analysis of air temperature rise from three different scientific disciplines: thermodynamics, climate science experiments, and climactic historical studies. The results coming from these diverse disciplines are nearly the same, within ± 1.6%. The direct radiative force of GHGs with a high level of scientific understanding is near 4.7 W/m2 on average over the Earth's entire surface in 2018, as compared to that in pre-industrial times in the mid-1700s. The additional radiative force of fast feedbacks coming from various forms of water gives approximately an additional ~15 W/m2. In 2018, these radiative forces heated the atmosphere by approximately 5.1 °C, which will create a thermal-equilibrium average ground surface temperature increase of 4.6 °C to 4.8 °C by the end of this century. After 2018, the temperature will continue to rise without any additional increases in the concentration of the GHGs, primarily of carbon dioxide and methane. These findings on the radiative force of GHGs in 2018 were applied to estimates of effects on major Earth ecosystems. This additional force of nearly 20 W/m2 causes an increase in ice melting by an additional rate of over 90 cm/year, a green-leaf temperature increase of nearly 5 °C, and a work energy increase of air by approximately 40 Joules/mole. This explains the observed high rates of ice melting at all altitudes and latitudes, the spread of deserts and increases in forest fires, as well as the increased energy of tornadoes, typhoons, hurricanes, and extreme weather, much more plausibly than the 1.5 °C increase in average global surface temperature over the same time interval. Planned mitigation and adaptation measures might prove to be much more effective when directed toward the reduction of existing GHGs in the atmosphere. Keywords: greenhouse radiative force, greenhouse air temperature, greenhouse thermodynamics, greenhouse historical, greenhouse radiative force on ice, greenhouse radiative force on plants, greenhouse radiative force in air
152 Assessment on the Conduct of Arnis Competition in PASUC National Olympics 2015: Basis for Improvement of Rules in Competition
Authors: Paulo O. Motita
Abstract:
The Philippine Association of State Colleges and Universities (PASUC) is an association of state-owned and operated higher learning institutions in the Philippines; it is the association that spearheads the conduct of the annual national athletic competitions for state colleges and universities, and Arnis is one of the regular sports. In 2009, Republic Act 9850 declared Arnis the national sport and martial art of the Philippines. Arnis, an ancient Filipino martial art, is a major sport in the annual Palarong Pambansa and other school-based sports events. The researcher, a Filipino martial arts master and former athlete, sought to determine the extent of acceptability of the Arnis rules in competition, which serves as the basis for the development of the rules. The study aimed to assess the conduct of the Arnis competition in PASUC Olympics 2015 in Tuguegarao City, Cagayan, Philippines: the rules and their conduct as perceived by officiating officials, coaches and athletes during the competition held February 7-15, 2015. The descriptive method of research was used; the survey questionnaire serving as the data-gathering instrument was validated. The respondents were composed of 12 officiating officials, 19 coaches and 138 athletes representing the different regions. Their responses were treated using the mean, percentage and one-way analysis of variance. The study revealed that the conduct of the Arnis competition in PASUC Olympics 2015 was at a low to moderate extent as perceived by the three groups of respondents in terms of officiating, scoring and giving violations. Furthermore, there is no significant difference in the assessments of the three groups of respondents for Anyo and Labanan. Considering the findings of the study, the following conclusions were drawn: 1) There is a need to identify the criteria for judging in Anyo and for a tedious scrutiny of the rules of the game for Labanan. 2) The three groups of respondents have similar views in the assessment of the overall competition for Anyo: there were no clear technical guidelines for judging the performance of the Anyo event. 3) The three groups of respondents have similar views in the assessment of the overall competition for Labanan: there were no clear technical guidelines for the majority rule of giving scores in Labanan. 4) Anyo performance should be rated according to the effectiveness of techniques and the handling of the weapon(s) being used. 5) On other issues and concerns regarding the rules of competition, Labanan should be addressed by improving the rules of competition, focusing on the application of majority rules for scoring; players shall be given rest intervals, and clear guidelines and standard qualifications for officiating officials shall be set. Keywords: PASUC Olympics 2015, Arnis rules of competition, Anyo, Labanan, officiating
Procedia PDF Downloads 458
151 R&D Diffusion and Productivity in a Globalized World: Country Capabilities in an MRIO Framework
Authors: S. Jimenez, R. Duarte, J. Sanchez-Choliz, I. Villanua
Abstract:
There is a certain consensus in the economic literature about the factors that have influenced the historical differences in growth rates observed between developed and developing countries. However, it is less clear which elements have marked the different growth paths of developed economies in recent decades. R&D has always been seen as one of the major sources of technological progress and hence of productivity growth, which is directly influenced by technological developments. Following the recent literature, we can say that innovation pushes the technological frontier forward as well as encouraging future innovation through the creation of externalities. In other words, the productivity benefits of innovation are not fully appropriated by innovators but also spread through the rest of the economy, encouraging absorptive capacities, which have become especially important in a context of increasing fragmentation of production. This paper aims to contribute to this literature in two ways: first, by exploring alternative indexes of R&D flows embodied in inter-country, inter-sectoral flows of goods and services (as an approximation to technology spillovers) that capture the structural and technological characteristics of countries; and second, by analyzing the impact of direct and embodied R&D on the evolution of labor productivity at the country/sector level in recent decades. The traditional calculation within a multiregional input-output framework assumes that all countries have the same capability to absorb technology, but this is not the case: each country has different structural features and, as part of the literature claims, this implies different capabilities. In order to capture these differences, we propose to use weights based on specialization-structure indexes: one related to the specialization of countries in high-tech sectors, and another based on a dispersion index. We propose these two measures because, as far as we understand, country capabilities can be captured in different ways: through countries' specialization in knowledge-intensive sectors, such as Chemicals or Electrical Equipment, or through an intermediate technology effort spread across different sectors. Results suggest the increasing importance of country capabilities as trade openness increases. Moreover, focusing on the country rankings, we observe that with high-tech-weighted embodied R&D, countries such as China, Taiwan and Germany rise into the top five despite not having the highest intensities of R&D expenditure, showing the importance of country capabilities. Additionally, through a fixed-effects panel data model we show that embodied R&D is indeed important for explaining labor productivity increases, even more so than direct R&D investment. This reflects that globalization matters more than has been acknowledged until now. It is true that almost all related analyses consider the effect of direct R&D intensity at t-1 on economic growth. Nevertheless, from our point of view, R&D evolves as a delayed flow, and some time is needed before its effects on the economy become visible, as some authors have already claimed. Our estimations tend to corroborate this hypothesis, obtaining a lag of between 4 and 5 years. Keywords: economic growth, embodied, input-output, technology
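The embodied-R&D calculation described above rests on standard input-output accounting: direct R&D intensities are propagated through the Leontief inverse. The sketch below illustrates this with a hypothetical three-sector economy and hypothetical specialization weights; the paper's actual MRIO tables and weighting indexes are not reproduced here.

```python
import numpy as np

# Minimal sketch of embodied R&D in an input-output framework, using
# assumed 3-sector toy data (not the paper's MRIO tables).
A = np.array([[0.10, 0.05, 0.02],   # technical coefficients matrix
              [0.08, 0.12, 0.04],
              [0.03, 0.06, 0.09]])
r = np.array([0.030, 0.012, 0.005]) # direct R&D intensity per unit of output

L = np.linalg.inv(np.eye(3) - A)    # Leontief inverse (I - A)^-1
r_embodied = r @ L                  # direct + indirect R&D embodied per unit of final output

# A capability weight (e.g., based on specialization in high-tech sectors,
# as the paper proposes) would rescale the embodied flows before use.
w = np.array([1.2, 1.0, 0.8])       # hypothetical specialization weights
print(r_embodied * w)
```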
Procedia PDF Downloads 124
150 Monitoring the Effect of Doxorubicin Liposomal in VX2 Tumor Using Magnetic Resonance Imaging
Authors: Ren-Jy Ben, Jo-Chi Jao, Chiu-Ya Liao, Ya-Ru Tsai, Lain-Chyr Hwang, Po-Chou Chen
Abstract:
Cancer is still one of the serious diseases threatening the lives of human beings. How to achieve an early diagnosis and effective treatment of tumors is therefore a very important issue. Animal carcinoma models provide a simulation tool for the study of pathogenesis, biological characteristics and therapeutic effects. Recently, drug delivery systems have been rapidly developed to improve therapeutic effects. Liposomes play an increasingly important role in clinical diagnosis and therapy for delivering a pharmaceutical or contrast agent to targeted sites. Liposomes can be absorbed and excreted by the human body and are well known to cause it no harm. This study aimed to compare the therapeutic effects of encapsulated (doxorubicin liposomal, LipoDox) and un-encapsulated (doxorubicin, Dox) anti-tumor drugs using Magnetic Resonance Imaging (MRI). Twenty-four New Zealand rabbits implanted with VX2 carcinoma in the left thigh were divided into three groups: a control group (untreated), a Dox-treated group and a LipoDox-treated group, with 8 rabbits in each group. MRI scans were performed three days after tumor implantation. A 1.5T GE Signa HDxt whole-body MRI scanner with a high-resolution knee coil was used in this study. After a 3-plane localizer scan, Three-Dimensional (3D) Fast Spin Echo (FSE) T2-Weighted Imaging (T2WI) was used for tumor volumetric quantification, and Two-Dimensional (2D) spoiled gradient recalled echo (SPGR) Dynamic Contrast-Enhanced (DCE) MRI was used for tumor perfusion evaluation. DCE-MRI was designed to acquire four baseline images, followed by injection of the contrast agent Gd-DOTA through the ear vein of the rabbits. Afterwards, a series of 32 images was acquired to observe the signal change over time in the tumor and muscle. The MRI scanning was scheduled on a weekly basis for a period of four weeks to observe tumor progression longitudinally. The Dox and LipoDox treatments were administered 3 times in the first week immediately after VX2 tumor implantation. ImageJ was used to quantify tumor volume and the time-course signal enhancement on DCE images. The changes in tumor size showed that the growth of VX2 tumors was effectively inhibited in both the LipoDox-treated and Dox-treated groups. Furthermore, the tumor volume of the LipoDox-treated group was significantly lower than that of the Dox-treated group, which implies that LipoDox has a better therapeutic effect than Dox. The signal intensity of the LipoDox-treated group was significantly lower than that of the other two groups, which implies that the targeted therapeutic drug remained in the tumor tissue. This study provides a radiation-free and non-invasive MRI method for therapeutic monitoring of targeted liposomes in an animal tumor model. Keywords: doxorubicin, dynamic contrast-enhanced MRI, lipodox, magnetic resonance imaging, VX2 tumor model
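A minimal sketch of the DCE time-course quantification described above: four baseline images followed by 32 post-contrast images, with percent enhancement computed relative to the pre-contrast baseline. The ROI signal values below are synthetic placeholders, since the study's measurements were taken in ImageJ.

```python
import numpy as np

# Sketch of the DCE-MRI time-course analysis: 4 baseline acquisitions
# followed by 32 post-contrast images. ROI mean signals are synthetic
# placeholders standing in for ImageJ measurements.
signal = np.concatenate([np.full(4, 100.0),                      # baseline
                         100 + 80 * (1 - np.exp(-0.3 * np.arange(32)))])

s0 = signal[:4].mean()                    # pre-contrast baseline level
enhancement = 100.0 * (signal - s0) / s0  # percent signal enhancement
print(f"Peak enhancement: {enhancement.max():.1f}%")
```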
Procedia PDF Downloads 457
149 Applicability and Reusability of Fly Ash and Base Treated Fly Ash for Adsorption of Catechol from Aqueous Solution: Equilibrium, Kinetics, Thermodynamics and Modeling
Authors: S. Agarwal, A. Rani
Abstract:
Catechol is a natural polyphenolic compound that widely exists in higher plants such as teas, vegetables, fruits, tobaccos, and some traditional Chinese medicines. Fly ash-based zeolites are capable of adsorbing a wide range of pollutants, but the process of zeolite synthesis is time-consuming and requires technical setups by industry, and the market cost of zeolites is quite high, restricting their use by small-scale industries for the removal of phenolic compounds. The present research proposes a simple alkaline treatment of FA to produce an effective adsorbent for catechol removal from wastewater. The effects of experimental parameters such as pH, temperature, initial concentration and adsorbent dose on the removal of catechol were studied in a batch reactor. For this purpose, the adsorbent materials were mixed with aqueous solutions containing catechol at initial concentrations of 50-200 mg/L and then shaken continuously in a thermostatic orbital incubator shaker at 30 ± 0.1 °C for 24 h. The samples were withdrawn from the shaker at predetermined time intervals and separated by centrifugation (centrifuge machine MBL-20) at 2000 rpm for 4 min to yield a clear supernatant for analysis of the equilibrium concentrations of the solutes. The concentrations were measured with a double-beam UV/Visible spectrophotometer (model Spectrscan UV 2600/02) at a wavelength of 275 nm for catechol. In the present study, the use of a low-cost adsorbent (BTFA) derived from coal fly ash (FA) has been investigated as a substitute for expensive methods of catechol sequestration. The FA and BTFA adsorbents were well characterized by XRF, FE-SEM with EDX, FTIR, and surface area and porosity measurements, which establish the chemical constituents, functional groups and morphology of the adsorbents. The catechol adsorption capacities of the synthesized BTFA and the native material were determined. Adsorption increased slightly with an increase in pH value. The monolayer adsorption capacities of FA and BTFA for catechol were 100 mg g⁻¹ and 333.33 mg g⁻¹ respectively, and maximum adsorption occurred within 60 minutes for both adsorbents used in this test. The equilibrium data are best fitted by the Freundlich isotherm on the basis of error analysis (RMSE, SSE, and χ²). Adsorption was found to be spontaneous and exothermic on the basis of the thermodynamic parameters (ΔG°, ΔS°, and ΔH°). A pseudo-second-order kinetic model better fitted the data for both FA and BTFA. BTFA showed greater adsorptive capacity, high separation selectivity, and excellent recyclability compared to FA. These findings indicate that BTFA could be employed as an effective and inexpensive adsorbent for the removal of catechol from wastewater. Keywords: catechol, fly ash, isotherms, kinetics, thermodynamic parameters
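For readers who wish to reproduce model fits of this kind, the sketch below fits a Freundlich isotherm and a linearized pseudo-second-order kinetic model with NumPy/SciPy. The data points are hypothetical stand-ins, not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical equilibrium data (not the study's measurements)
ce = np.array([5.0, 12.0, 30.0, 55.0, 90.0])      # equilibrium conc., mg/L
qe = np.array([40.0, 70.0, 120.0, 170.0, 230.0])  # uptake, mg/g

def freundlich(c, kf, n):
    # Freundlich isotherm: qe = Kf * Ce^(1/n)
    return kf * c ** (1.0 / n)

(kf, n), _ = curve_fit(freundlich, ce, qe, p0=(10.0, 2.0))
print(f"Freundlich: Kf={kf:.2f}, n={n:.2f}")

# Pseudo-second-order kinetics, linearized: t/qt = 1/(k2*qe^2) + t/qe
t = np.array([5.0, 15.0, 30.0, 45.0, 60.0])       # min
qt = np.array([80.0, 180.0, 260.0, 300.0, 320.0]) # mg/g (hypothetical)
slope, intercept = np.polyfit(t, t / qt, 1)
qe_fit, k2 = 1.0 / slope, slope ** 2 / intercept
print(f"PSO: qe={qe_fit:.1f} mg/g, k2={k2:.5f} g/(mg*min)")
```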
Procedia PDF Downloads 125
148 Investigation on Correlation of Earthquake Intensity Parameters with Seismic Response of Reinforced Concrete Structures
Authors: Semra Sirin Kiris
Abstract:
Nonlinear dynamic analysis is permitted for structures without any restrictions. The important issue is the selection of the design earthquake for the analyses, since quite different responses may be obtained using ground motion records from the same general area, even ones resulting from the same earthquake. In seismic design codes, the method requires scaling earthquake records based on the site response spectrum to a specified hazard level. Many studies have indicated that this selection constraint can cause a large scatter in response, and that other characteristics of the ground motion, obtained in a different manner, may demonstrate better correlation with peak seismic response. For this reason, the influence of eleven different ground motion parameters on the peak displacement of reinforced concrete systems is examined in this paper. From 7020 nonlinear time-history analyses of single-degree-of-freedom systems, the most effective earthquake parameters are given for ranges of the initial period and strength ratio of the structures. In this study, a hysteresis model for reinforced concrete called Q-hyst is used, which does not take into account strength and stiffness degradation. The post-yielding to elastic stiffness ratio is taken as 0.15. The initial period T ranges from 0.1 s to 0.9 s at 0.1 s intervals, and three different strength ratios are used. The 260 earthquake records selected all have magnitudes greater than M=6. The earthquake parameters, related to the energy content, duration or peak values of the ground motion records, are PGA (Peak Ground Acceleration), PGV (Peak Ground Velocity), PGD (Peak Ground Displacement), MIV (Maximum Incremental Velocity), EPA (Effective Peak Acceleration), EPV (Effective Peak Velocity), teff (Effective Duration), A95 (Arias Intensity-based Parameter), SPGA (Significant Peak Ground Acceleration), ID (Damage Factor) and Sa (Spectral Acceleration). Observing the correlation coefficients between the ground motion parameters and the peak displacement of the structures, different earthquake parameters play a role in the peak displacement demand depending on the ranges formed by the period and the strength ratio of the reinforced concrete system. The influence of Sa tends to decrease for high values of the strength ratio and T=0.3s-0.6s. ID and PGD are not useful measures of earthquake effect, since high correlation with displacement demand is not observed. The influence of A95 is high for T=0.1s but low for higher values of T and the strength ratio. PGA, EPA and SPGA show the highest correlation for T=0.1s, but their effectiveness decreases with high T. Considering the whole range of structural parameters, MIV is the most effective parameter. Keywords: earthquake parameters, earthquake resistant design, nonlinear analysis, reinforced concrete
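The core computation of the study, correlating candidate intensity measures with peak displacement demand, can be sketched as follows. The records and displacement demands below are random placeholders, not the study's 260 records, and only three of the eleven measures are shown; the intensity-measure definitions (PGA, PGV, Arias intensity) follow their standard formulas.

```python
import numpy as np

rng = np.random.default_rng(0)
dt = 0.01  # s, sampling interval (assumed)

def intensity_measures(acc):
    # Standard definitions: PGA, PGV (via integrated velocity), Arias intensity
    vel = np.cumsum(acc) * dt
    pga, pgv = np.abs(acc).max(), np.abs(vel).max()
    arias = np.pi / (2 * 9.81) * np.sum(acc ** 2) * dt
    return pga, pgv, arias

# Synthetic decaying-noise "records" standing in for real ground motions
records = [rng.normal(0, 1.0, 2000) * np.exp(-np.linspace(0, 3, 2000))
           for _ in range(50)]
ims = np.array([intensity_measures(a) for a in records])

# Synthetic peak displacement demand (in practice: from nonlinear analyses)
peak_disp = rng.normal(0, 0.05, 50) + 0.02 * ims[:, 1]

for name, col in zip(["PGA", "PGV", "Arias"], ims.T):
    r = np.corrcoef(col, peak_disp)[0, 1]
    print(f"{name}: r = {r:.2f}")
```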
Procedia PDF Downloads 151
147 A Complex Network Approach to Structural Inequality of Educational Deprivation
Authors: Harvey Sanchez-Restrepo, Jorge Louca
Abstract:
Equity and education are a major focus of government policies around the world due to their relevance for addressing the sustainable development goals launched by Unesco. In this research, we developed a primary analysis of a data set of more than one hundred educational and non-educational factors associated with learning, coming from a census-based large-scale assessment carried out in Ecuador with 1,038,328 students, their families, teachers, and school directors, throughout 2014-2018. Each participating student was assessed by a standardized computer-based test. Learning outcomes were calibrated through item response theory with a two-parameter logistic model, yielding raw scores that were re-scaled and synthesized into a learning index (LI). Our objective was to develop a network for modelling educational deprivation and to analyze the structure of inequality gaps, as well as their relationship with socioeconomic status, school financing, and students' ethnicity. Results from the model show that 348,270 students did not develop the minimum skills (prevalence rate = 0.215) and that Afro-Ecuadorian, Montuvio and Indigenous students exhibited the highest prevalences, at 0.312, 0.278 and 0.226, respectively. Regarding the socioeconomic status (SES) of students, the modularity classes show clearly that the system is out of equilibrium: the first decile (the poorest) exhibits a prevalence rate of 0.386 while the rate for decile ten (the richest) is 0.080, showing an intense negative relationship between learning and SES given by R = -0.58 (p < 0.001). Another interesting and unexpected result is the average weighted degree (426.9) for both private and public schools attended by Afro-Ecuadorian students, groups that got the highest PageRank (0.426), pointing out that they suffer the highest educational deprivation due to discrimination, even when belonging to the richest decile. The model also found the factors that explain deprivation through the highest PageRank and the greatest degree of connectivity for the first decile; they are: financial bonus for attending school, computer access, internet access, number of children, living with at least one parent, access to books, reading books, phone access, time for homework, teachers arriving late, paid work, positive expectations about schooling, and mother's education. These results provide very accurate and clear knowledge about the variables affecting the poorest students and the inequalities they produce, from which needs profiles might be defined, as well as actions on the factors that can be influenced. Finally, these results confirm that network analysis is fundamental for educational policy, especially when linking reliable microdata with social macro-parameters, because it allows us to infer how gaps in educational achievement are driven by students' context at the time of assigning resources. Keywords: complex network, educational deprivation, evidence-based policy, large-scale assessments, policy informatics
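The network measures cited above (weighted degree and PageRank) can be computed with standard tools. The sketch below uses networkx on a toy student-factor graph whose node names and edge weights are purely illustrative, not the study's data.

```python
import networkx as nx

# Toy graph: nodes and weights are illustrative placeholders only.
G = nx.Graph()
edges = [("decile_1", "computer_access", 0.9),
         ("decile_1", "internet_access", 0.8),
         ("decile_1", "mother_education", 0.7),
         ("afro_ecuadorian", "deprivation", 0.95),
         ("decile_10", "books_access", 0.6)]
G.add_weighted_edges_from(edges)

weighted_degree = dict(G.degree(weight="weight"))  # sum of incident weights
pagerank = nx.pagerank(G, weight="weight")         # weighted PageRank
print(weighted_degree)
print({k: round(v, 3) for k, v in pagerank.items()})
```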
Procedia PDF Downloads 124
146 A Quality Index Optimization Method for Non-Invasive Fetal ECG Extraction
Authors: Lucia Billeci, Gennaro Tartarisco, Maurizio Varanini
Abstract:
Fetal cardiac monitoring by fetal electrocardiogram (fECG) can provide significant clinical information about the health of the fetus. Despite this potential, the use of fECG in clinical practice has so far been quite limited due to the difficulties of its measurement. The recovery of fECG from signals acquired non-invasively with electrodes placed on the maternal abdomen is a challenging task, because abdominal signals are a mixture of several components and the fetal one is very weak. This paper presents an approach for fECG extraction from abdominal maternal recordings, which exploits the pseudo-periodicity of the fetal ECG. It consists of devising a quality index (fQI) for the fECG and finding the linear combinations of preprocessed abdominal signals that maximize this fQI (quality index optimization, QIO). It aims to improve on the performance of the most commonly adopted methods for fECG extraction, usually based on estimating and cancelling the maternal ECG (mECG). The procedure for fECG extraction and fetal QRS (fQRS) detection is completely unsupervised and based on the following steps: signal pre-processing; maternal ECG (mECG) extraction and maternal QRS detection; mECG component approximation and cancellation by weighted principal component analysis; and fECG extraction by fQI maximization with fetal QRS detection. The proposed method was compared with our previously developed procedure, which obtained the highest score at the PhysioNet/Computing in Cardiology Challenge 2013. That procedure was based on removing from the abdominal signals the mECG estimated by principal component analysis (PCA) and applying Independent Component Analysis (ICA) to the residual signals. Both methods were developed and tuned using 69 one-minute abdominal measurements with fetal QRS annotations from dataset A provided by the PhysioNet/Computing in Cardiology Challenge 2013. The QIO-based and ICA-based methods were compared on two databases of abdominal maternal ECG available on the PhysioNet site. The first is the Abdominal and Direct Fetal Electrocardiogram Database (ADdb), which contains fetal QRS annotations, thus allowing a quantitative performance comparison; the second is the Non-Invasive Fetal Electrocardiogram Database (NIdb), which does not contain fetal QRS annotations, so that the comparison between the two methods can only be qualitative. On the annotated database ADdb, the QIO method provided the performance indexes Sens=0.9988, PPA=0.9991, F1=0.9989, surpassing the ICA-based one, which provided Sens=0.9966, PPA=0.9972, F1=0.9969. The comparison on NIdb was performed by defining an index of quality for the fetal RR series; this index resulted higher for the QIO-based method than for the ICA-based one in 35 out of 55 records of the NIdb. The QIO-based method gave very high performance with both databases. The results of this study point toward applying the algorithm in a fully unsupervised way for implementation in wearable devices for self-monitoring of fetal health. Keywords: fetal electrocardiography, fetal QRS detection, independent component analysis (ICA), optimization, wearable
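One building block of the pipeline, approximating and cancelling the dominant maternal component by principal component analysis, can be sketched on synthetic multichannel data as below. This is a plain (unweighted) PCA illustration, not the authors' full weighted-PCA or fQI-maximization code; signal shapes and sampling values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n, ch = 6000, 4  # samples, abdominal channels (assumed)

# Synthetic spiky waveforms standing in for maternal and fetal beats
mecg = np.sin(2 * np.pi * 1.2 * np.arange(n) / 500) ** 15
fecg = 0.1 * np.sin(2 * np.pi * 2.3 * np.arange(n) / 500) ** 15
X = (np.outer(mecg, rng.uniform(0.5, 1.0, ch))
     + np.outer(fecg, rng.uniform(0.2, 0.5, ch))
     + 0.01 * rng.normal(size=(n, ch)))

# PCA via SVD: the dominant component approximates the (much stronger) mECG
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
mecg_est = s[0] * np.outer(U[:, 0], Vt[0])  # rank-1 maternal approximation
residual = Xc - mecg_est                    # residuals retain the fetal ECG
print(residual.shape)
```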
Procedia PDF Downloads 280
145 The Diagnostic Utility and Sensitivity of the Xpert® MTB/RIF Assay in Diagnosing Mycobacterium tuberculosis in Bone Marrow Aspirate Specimens
Authors: Nadhiya N. Subramony, Jenifer Vaughan, Lesley E. Scott
Abstract:
In South Africa, the World Health Organisation estimated 454,000 new cases of Mycobacterium tuberculosis (M.tb) infection (MTB) in 2015. Disseminated tuberculosis arises from haematogenous spread and seeding of the bacilli in extrapulmonary sites. The gold standard for the detection of MTB in bone marrow is TB culture, which has an average turnaround time of 6 weeks. Histological examination of trephine biopsies to diagnose MTB also involves a delay, owing mainly to the 5-7 day processing period prior to microscopic examination. Adding to the diagnostic delay is the non-specific nature of granulomatous inflammation, which is the hallmark of MTB involvement of the bone marrow. A Ziehl-Neelsen stain (which highlights acid-fast bacilli) is therefore mandatory to confirm the diagnosis but can take up to 3 days for processing and evaluation. Owing to this delay in diagnosis, many patients are lost to follow-up or remain untreated while results are awaited, thus encouraging the spread of undiagnosed TB. The Xpert® MTB/RIF (Cepheid, Sunnyvale, CA) is the molecular test used in the South African national TB program as the initial diagnostic test for pulmonary TB. This study investigates the optimisation and performance of the Xpert® MTB/RIF on bone marrow aspirate (BMA) specimens, a first since the introduction of the assay in the diagnosis of extrapulmonary TB. BMA received for immunophenotypic analysis, as part of the investigation of disseminated MTB or the evaluation of cytopenias in immunocompromised patients, were used. Processing of BMA on the Xpert® MTB/RIF was optimised to ensure that bone marrow in EDTA and heparin did not inhibit the PCR reaction. Inactivated M.tb was spiked into clinical bone marrow specimens and into distilled water (as a control). A volume of 500 µl and an incubation time of 15 minutes with sample reagent were investigated as the processing protocol. A total of 135 BMA specimens had sufficient residual volume for Xpert® MTB/RIF testing; however, 22 specimens (16.3%) were not included in the final statistical analysis, as an adequate trephine biopsy and/or TB culture was not available. Xpert® MTB/RIF testing was not affected by BMA material in the presence of heparin or EDTA, but the overall detection of MTB in BMA was low compared to histology and culture. The sensitivity of the Xpert® MTB/RIF compared to both histology and culture was 8.7% (95% confidence interval (CI): 1.07-28.04%), and its sensitivity compared to histology only was 11.1% (95% CI: 1.38-34.7%). The specificity of the Xpert® MTB/RIF was 98.9% (95% CI: 93.9-99.7%). Although the Xpert® MTB/RIF generates a faster result than histology and TB culture and is less expensive than culture and drug susceptibility testing, its low sensitivity precludes its use for the diagnosis of MTB in bone marrow aspirate specimens and warrants alternative/additional testing to optimise the assay. Keywords: bone marrow aspirate, extrapulmonary TB, low sensitivity, Xpert® MTB/RIF
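The exact binomial (Clopper-Pearson) limits behind sensitivity estimates of this kind can be reproduced as below. The counts used (2 detected out of 23 positives) are an assumption chosen to be consistent with the reported 8.7% sensitivity and its 1.07-28.04% interval.

```python
from scipy.stats import beta

def clopper_pearson(k, n, alpha=0.05):
    # Exact binomial confidence limits via the beta distribution
    lo = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lo, hi

k, n = 2, 23  # assumed counts consistent with the reported 8.7%
lo, hi = clopper_pearson(k, n)
print(f"Sensitivity: {k/n:.1%} (95% CI {lo:.2%} - {hi:.2%})")
```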
Procedia PDF Downloads 172
144 Cessna Citation X Business Aircraft Stability Analysis Using Linear Fractional Representation LFRs Model
Authors: Yamina Boughari, Ruxandra Mihaela Botez, Florian Theel, Georges Ghazi
Abstract:
Clearance of the flight control laws of a civil aircraft is a long and expensive process in the aerospace industry. Thousands of flight combinations in terms of speeds, altitudes, gross weights, centers of gravity and angles of attack have to be investigated and proved to be safe. Nonetheless, with this method a worst flight condition can easily be missed, and its omission could lead to a critical situation. It is effectively impossible to analyze a model over the infinite number of cases contained within its flight envelope, which would require more time and therefore more design cost. Therefore, in industry, the technique of meshing the flight envelope is commonly used: for each point of the flight envelope, simulation of the associated model establishes whether the specifications are satisfied. In order to perform fast, comprehensive and effective analysis, varying-parameter models have been developed by incorporating variations, or uncertainties, into the nominal models; these are known as Linear Fractional Representation (LFR) models, and they can describe the aircraft dynamics while taking into account uncertainties over the flight envelope. In this paper, the LFR models are developed using speed and altitude as the varying parameters, built from several flight conditions expressed in terms of speeds and altitudes. This method has gained great interest among aeronautical companies, which see a promising future for it in modeling, and particularly in the design and certification of control laws. In this research paper, we focus on the open-loop stability analysis of the Cessna Citation X. The data are provided by a Level D Research Aircraft Flight Simulator, which corresponds to the highest level of flight dynamics certification; this simulator was developed by CAE Inc., and its development was based on the research requirements of the LARCASE laboratory. The acquired data were used to develop a linear model of the airplane in its longitudinal and lateral motions, and further to create the LFR models for 12 XCG/weight conditions, and thus the whole flight envelope, using a friendly Graphical User Interface developed during this study. The LFR models are then analyzed using an interval analysis method based upon a Lyapunov function, as well as the 'stability and robustness analysis' toolbox. The results were presented in the form of graphs, offering good readability, and were easily exploitable. The weakness of this method lies in a relatively long calculation time, about four hours for the entire flight envelope. Keywords: flight control clearance, LFR, stability analysis, robustness analysis
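A gridded open-loop stability check of the kind industry applies to a meshed flight envelope can be sketched as follows. The parameter-dependent state matrix A(v, h) below is a hypothetical stand-in for the LFR models identified from the simulator data, and the speed/altitude ranges are assumptions.

```python
import numpy as np

def A(v, h):
    # Hypothetical parameter-dependent short-period model (illustrative only;
    # not the Cessna Citation X LFR identified in the paper)
    return np.array([[-0.5 - 1e-3 * v,  1.0],
                     [-2.0 + 1e-4 * h, -0.8 - 2e-3 * v]])

speeds = np.linspace(120.0, 250.0, 14)      # m/s (assumed envelope)
altitudes = np.linspace(0.0, 12000.0, 13)   # m (assumed envelope)

# Open-loop stability at each grid point: all eigenvalues in the left half-plane
stable = all(np.all(np.linalg.eigvals(A(v, h)).real < 0)
             for v in speeds for h in altitudes)
print("Open-loop stable over the gridded envelope:", stable)
```

Gridding shares the weakness the abstract names (a worst case can fall between grid points), which is precisely what the LFR/interval-analysis approach is meant to remedy.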
Procedia PDF Downloads 352
143 Combined Effect of Vesicular System and Iontophoresis on Skin Permeation Enhancement of an Analgesic Drug
Authors: Jigar N. Shah, Hiral J. Shah, Praful D. Bharadia
Abstract:
The major challenge faced by formulation scientists in transdermal drug delivery is overcoming the inherent barriers to skin permeation. The stratum corneum layer of the skin acts as the rate-limiting step in transdermal transport and reduces drug permeation through the skin. Many approaches have been used to enhance the penetration of drugs through this layer. The purpose of this study is to investigate the development and evaluation of a combined approach of drug carriers and iontophoresis as a vehicle to improve the skin permeation of an analgesic drug. Iontophoresis is a non-invasive technique for transporting charged molecules into and through tissues by a mild electric field. It has been shown to effectively deliver a variety of drugs across the skin to the underlying tissue. In addition to enhanced continuous transport, iontophoresis allows dose titration by adjusting the electric field, which makes personalized dosing feasible. Drug carriers can modify the physicochemical properties of the encapsulated molecule and offer a means to facilitate the percutaneous delivery of difficult-to-uptake substances. Recently, there have been some reports on using liposomes, microemulsions and polymeric nanoparticles as vehicles for iontophoretic drug delivery. Niosomes, the nonionic surfactant-based vesicles that are essentially similar in properties to liposomes, have been proposed as an alternative to liposomes. Niosomes are more stable and free from other shortcomings of liposomes. Recently, the transdermal delivery of certain drugs using niosomes has been envisaged, and niosomes have proved to be superior transdermal nanocarriers. Proniosomes overcome some of the physical stability-related problems of niosomes. The proniosomal structure is a liquid crystalline-compact niosome hybrid which can be converted into niosomes upon hydration. The combined use of drug carriers and iontophoresis could offer many additional benefits. The system was evaluated for encapsulation efficiency, vesicle size, zeta potential, Transmission Electron Microscopy (TEM), DSC, in-vitro release, ex-vivo permeation across skin, and rate of hydration. The use of proniosomal gel as a vehicle for transdermal iontophoretic delivery was evaluated in-vitro. The characteristics of the applied electric current, such as density, type, frequency, and on/off interval ratio, were observed. The study confirms the synergistic effect of proniosomes and iontophoresis in improving the transdermal permeation profile of the selected analgesic drug. It is concluded that proniosomal gel can be used as a vehicle for transdermal iontophoretic drug delivery under suitable electric conditions. Keywords: iontophoresis, niosomes, permeation enhancement, transdermal delivery
Procedia PDF Downloads 379
142 Carbon Sequestration in Spatio-Temporal Vegetation Dynamics
Authors: Nothando Gwazani, K. R. Marembo
Abstract:
An increase in the atmospheric concentration of carbon dioxide (CO₂) from fossil fuels and land use change necessitates the identification of strategies for mitigating the threats associated with global warming. Oceans are insufficient to offset the accelerating rate of carbon emissions, but this challenge can be effectively addressed by the storage of carbon in terrestrial carbon sinks. The gases with special optical properties that are responsible for climate warming include carbon dioxide (CO₂), water vapor, methane (CH₄), nitrous oxide (N₂O), nitrogen oxides (NOₓ), stratospheric ozone (O₃), carbon monoxide (CO) and chlorofluorocarbons (CFCs). Amongst these, CO₂ plays a crucial role, as it contributes 50% of the total greenhouse effect and has been linked to climate change. Because plants act as carbon sinks, interest in terrestrial carbon sequestration has increased in an effort to explore opportunities for climate change mitigation. Removal of carbon from the atmosphere is a topical issue that addresses one important aspect of an overall strategy for carbon management, namely helping to mitigate the increasing emissions of CO₂. Thus, terrestrial ecosystems have gained importance for their potential to sequester carbon and relieve the carbon sink burden on the oceans, where it has a substantial impact on ocean species. Field data and electromagnetic spectrum bands were analyzed using ArcGIS 10.2, QGIS 2.8 and ERDAS IMAGINE 2015 to examine the vegetation distribution. Satellite remote sensing data coupled with the Normalized Difference Vegetation Index (NDVI) were employed to assess future potential changes in vegetation distribution in the Eastern Cape Province of South Africa. The analysis, carried out at 5-year intervals, examines the amount of carbon absorbed using the vegetation distribution. In 2015, the numerical results showed low vegetation distribution, which increases the acidity of the oceans and gravely affects fish species and corals. The outcomes suggest that the study area could be effectively utilized for carbon sequestration so as to mitigate ocean acidification. The vegetation changes measured through this investigation suggest an environmental shift and a reduced vegetation carbon sink, which threatens biodiversity and ecosystems. In order to sustain the amount of carbon in terrestrial ecosystems, the identified ecological factors should be enhanced through the application of good land and forest management practices. This will increase the carbon stock of terrestrial ecosystems, thereby reducing direct loss to the atmosphere. Keywords: remote sensing, vegetation dynamics, carbon sequestration, terrestrial carbon sink
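The NDVI underlying the vegetation-distribution analysis follows the standard formula NDVI = (NIR - Red) / (NIR + Red). The sketch below computes it on placeholder band arrays; in practice the bands are read from the satellite imagery processed in ArcGIS/QGIS/ERDAS, and the 0.3 vegetation threshold is an assumption for illustration.

```python
import numpy as np

# Placeholder reflectance bands standing in for satellite imagery
rng = np.random.default_rng(42)
red = rng.uniform(0.05, 0.3, size=(100, 100))
nir = rng.uniform(0.2, 0.6, size=(100, 100))

ndvi = (nir - red) / (nir + red + 1e-12)  # epsilon avoids divide-by-zero
vegetated = ndvi > 0.3                    # assumed vegetation threshold
print(f"Vegetated fraction: {vegetated.mean():.1%}")
```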
Procedia PDF Downloads 151
141 Cross-Validation of the Data Obtained for ω-6 Linoleic and ω-3 α-Linolenic Acids Concentration of Hemp Oil Using Jackknife and Bootstrap Resampling
Authors: Vibha Devi, Shabina Khanam
Abstract:
Hemp (Cannabis sativa) possesses a rich content of ω-6 linoleic and ω-3 linolenic essential fatty acids in the ratio of 3:1, a rare and much-desired ratio that enhances the quality of hemp oil. These components are beneficial for cell development and body growth, strengthen the immune system, possess anti-inflammatory action, lower the risk of heart problems owing to their anti-clotting property, and are a remedy for arthritis and various disorders. The present study applies a supercritical fluid extraction (SFE) approach to hemp seed at various parameter conditions: temperature (40-80) °C, pressure (200-350) bar, flow rate (5-15) g/min, particle size (0.430-1.015) mm and amount of co-solvent (0-10) % of the solvent flow rate, through a central composite design (CCD). The CCD suggested 32 sets of experiments, which were carried out. As the SFE process includes a large number of variables, the present study recommends the application of resampling techniques for cross-validation of the obtained data. Cross-validation refits the model on each resample to obtain information regarding the error, variability, deviation, etc. Bootstrap and jackknife are the most popular resampling techniques; they create a large number of data sets through resampling from the original dataset and analyze these data to check the validity of the obtained results. Jackknife resampling is based on eliminating one observation from the original sample of size N without replacement. For jackknife resampling, the sample size is therefore 31 (eliminating one observation), repeated 32 times. The bootstrap is the frequently used statistical approach for estimating the sampling distribution of an estimator by resampling with replacement from the original sample. For bootstrap resampling, the sample size is 32, repeated 100 times. The estimands for these resampling techniques are the mean, standard deviation, variation coefficient and standard error of the mean. For the ω-6 linoleic acid concentration, the mean value was approximately 58.5 for both resampling methods, which is the average (central value) of the sample means of all data points. Similarly, for the ω-3 linolenic acid concentration, the mean was observed as 22.5 through both resamplings. The variance exhibits the spread of the data about its mean; a greater variance indicates a larger range of output data, which is 18 for ω-6 linoleic acid (ranging from 48.85 to 63.66%) and 6 for ω-3 linolenic acid (ranging from 16.71 to 26.2%). Further, the low standard deviation (approx. 1%), low standard error of the mean (< 0.8) and low variation coefficient (< 0.2) reflect the accuracy of the sample for prediction. All the estimator values of the variation coefficient, standard deviation and standard error of the mean are found within the 95% confidence interval. Keywords: resampling, supercritical fluid extraction, hemp oil, cross-validation
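The two resampling schemes can be illustrated directly. In the sketch below the 32 "measurements" are synthetic stand-ins for the study's CCD runs; the jackknife uses all 32 leave-one-out subsamples of size 31 and the bootstrap draws 100 resamples of size 32 with replacement, matching the counts described above.

```python
import numpy as np

rng = np.random.default_rng(7)
sample = rng.normal(58.5, 1.0, 32)  # placeholder: omega-6 linoleic acid, %

# Jackknife: N leave-one-out subsamples of size N-1
jack_means = np.array([np.delete(sample, i).mean() for i in range(len(sample))])
# Standard jackknife SE: sqrt((N-1)/N * sum((theta_i - theta_bar)^2))
jack_se = np.sqrt((len(sample) - 1) * np.var(jack_means))

# Bootstrap: 100 resamples of size N drawn with replacement
boot_means = np.array([rng.choice(sample, size=len(sample), replace=True).mean()
                       for _ in range(100)])

print(f"Jackknife mean {jack_means.mean():.2f}, SE {jack_se:.3f}")
print(f"Bootstrap mean {boot_means.mean():.2f}, SE {boot_means.std(ddof=1):.3f}")
```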
Procedia PDF Downloads 141