Search results for: Peter Taylor
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 635

155 Robust Inference with a Skew T Distribution

Authors: M. Qamarul Islam, Ergun Dogan, Mehmet Yazici

Abstract:

There is a growing body of evidence that non-normal data are more prevalent in nature than normal data. Examples can be quoted from, but are not restricted to, the areas of Economics, Finance and Actuarial Science. The non-normality considered here is expressed in terms of the fat-tailedness and asymmetry of the relevant distribution. In this study, a skew t distribution that can be used to model data that exhibit inherently non-normal behavior is considered. This distribution has tails fatter than a normal distribution and also exhibits skewness. Although maximum likelihood estimates can be obtained by iteratively solving the likelihood equations, which are non-linear in form, this can be problematic in terms of convergence and in many other respects as well. Therefore, it is preferred to use the method of modified maximum likelihood, in which the estimates are derived by expressing the intractable non-linear likelihood equations in terms of standardized ordered variates and replacing the intractable terms by their linear approximations obtained from the first two terms of a Taylor series expansion about the quantiles of the distribution. These estimates, called modified maximum likelihood estimates, are obtained in closed form. Hence, they are easy to compute and to manipulate analytically. In fact, the modified maximum likelihood estimates are asymptotically equivalent to maximum likelihood estimates. Even in small samples, the modified maximum likelihood estimates are found to be approximately the same as the maximum likelihood estimates obtained iteratively. It is shown in this study that the modified maximum likelihood estimates are not only unbiased but also substantially more efficient than the commonly used moment estimates or the least square estimates, which are known to be biased and inefficient in such cases.
Furthermore, in conventional regression analysis, it is assumed that the error terms are distributed normally and, hence, the well-known least square method is considered a suitable and preferred method for making the relevant statistical inferences. However, a number of empirical studies have shown that non-normal errors are more prevalent. Even transforming and/or filtering techniques may not produce normally distributed residuals. Here, a study is done for multiple linear regression models with random errors having a non-normal pattern. Through an extensive simulation, it is shown that the modified maximum likelihood estimates of the regression parameters are plausibly robust to the distributional assumptions and to various data anomalies as compared with the widely used least square estimates. Relevant tests of hypothesis are developed and explored for desirable properties in terms of their size and power. The tests based upon modified maximum likelihood estimates are found to be substantially more powerful than the tests based upon least square estimates. Several examples are provided from the areas of Economics and Finance where such distributions are interpretable in terms of the efficient market hypothesis with respect to asset pricing, portfolio selection, risk measurement, capital allocation, etc.
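A minimal numerical sketch of the linearization step described above, assuming the form g(z) = z/(1 + z²/ν) for the intractable term in the t-likelihood equations and an arbitrary illustrative expansion point rather than the actual distribution quantiles:

```python
import math

# Intractable nonlinear term appearing in the t-distribution
# likelihood equations (assumed form, for illustration only).
def g(z, nu):
    return z / (1.0 + z * z / nu)

# Analytic derivative of g, used for the Taylor linearization.
def g_prime(z, nu):
    return (1.0 - z * z / nu) / (1.0 + z * z / nu) ** 2

# First two Taylor terms about the point t give g(z) ~ alpha + beta * z,
# which makes the likelihood equations linear and solvable in closed form.
def mml_coeffs(t, nu):
    beta = g_prime(t, nu)
    alpha = g(t, nu) - beta * t
    return alpha, beta

nu = 5.0
t = 0.8  # illustrative expansion point (a distribution quantile in MML)
alpha, beta = mml_coeffs(t, nu)
```

The linear approximation is exact at the expansion point and close nearby, which is what allows the estimates to be written in closed form.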

Keywords: least square estimates, linear regression, maximum likelihood estimates, modified maximum likelihood method, non-normality, robustness

Procedia PDF Downloads 387
154 A Posterior Predictive Model-Based Control Chart for Monitoring Healthcare

Authors: Yi-Fan Lin, Peter P. Howley, Frank A. Tuyl

Abstract:

Quality measurement and reporting systems are used in healthcare internationally. In Australia, the Australian Council on Healthcare Standards records and reports hundreds of clinical indicators (CIs) nationally across the healthcare system. These CIs are measures of performance in the clinical setting, and are used as a screening tool to help assess whether a standard of care is being met. Existing analysis and reporting of these CIs incorporate Bayesian methods to address sampling variation; however, such assessments are retrospective in nature, reporting upon the previous six or twelve months of data. The use of Bayesian methods within statistical process control for monitoring systems is an important pursuit to support more timely decision-making. Our research has developed and assessed a new graphical monitoring tool, similar to a control chart, based on the beta-binomial posterior predictive (BBPP) distribution to facilitate the real-time assessment of health care organizational performance via CIs. The BBPP charts have been compared with the traditional Bernoulli CUSUM (BC) chart by simulation. The more traditional “central” and “highest posterior density” (HPD) interval approaches were each considered to define the limits, and the multiple charts were compared via in-control and out-of-control average run lengths (ARLs), assuming that the parameter representing the underlying CI rate (proportion of cases with an event of interest) required estimation. Preliminary results have identified that the BBPP chart with HPD-based control limits provides better out-of-control run length performance than the central interval-based and BC charts. Further, the BC chart’s performance may be improved by using Bayesian parameter estimation of the underlying CI rate.
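A minimal sketch of the beta-binomial posterior predictive distribution and an HPD-style control region, using only the standard library; the prior, counts, and coverage below are illustrative assumptions, not the paper's actual data:

```python
import math

# Beta-binomial pmf: predictive probability of x events among m future
# cases, given posterior Beta(a, b) for the underlying CI rate.
def betabinom_pmf(x, m, a, b):
    logp = (math.lgamma(m + 1) - math.lgamma(x + 1) - math.lgamma(m - x + 1)
            + math.lgamma(x + a) + math.lgamma(m - x + b) - math.lgamma(m + a + b)
            + math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b))
    return math.exp(logp)

# HPD-style region: accumulate outcomes in decreasing order of
# probability until the requested coverage is reached.
def hpd_region(m, a, b, coverage=0.95):
    pmf = [(betabinom_pmf(x, m, a, b), x) for x in range(m + 1)]
    pmf.sort(reverse=True)
    region, total = [], 0.0
    for p, x in pmf:
        region.append(x)
        total += p
        if total >= coverage:
            break
    return sorted(region), total

# Uniform Beta(1, 1) prior, 40 events in 980 prior cases, 50 future cases.
region, cov = hpd_region(m=50, a=1 + 40, b=1 + 940)
```

Outcomes outside `region` would then signal a possible shift in the underlying CI rate, analogously to points beyond control-chart limits.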

Keywords: average run length (ARL), bernoulli cusum (BC) chart, beta binomial posterior predictive (BBPP) distribution, clinical indicator (CI), healthcare organization (HCO), highest posterior density (HPD) interval

Procedia PDF Downloads 189
153 The Possible Double-Edged Sword Effects of Online Learning on Academic Performance: A Quantitative Study of Preclinical Medical Students

Authors: Atiwit Sinyoo, Sekh Thanprasertsuk, Sithiporn Agthong, Pasakorn Watanatada, Shaun Peter Qureshi, Saknan Bongsebandhu-Phubhakdi

Abstract:

Background: Since the SARS-CoV-2 virus became extensively disseminated throughout the world, online learning has become one of the most hotly debated topics in educational reform. While some studies have already shown the advantages of online learning, questions remain concerning how online learning affects students’ learning behavior and academic achievement when each student learns in a different way. Hence, we aimed to develop a guide for preclinical medical students to avoid the drawbacks and gain the benefits of online learning, which is possibly a double-edged sword. Methods: We used a multiple-choice questionnaire to evaluate the learning behavior of second-year Thai medical students in the neuroscience course. All traditional face-to-face lecture classes were video-recorded and promptly posted to the online learning platform throughout this course. Students could pick and choose whatever classes they wanted to attend, and they could use online learning as often as they wished. Academic performance was evaluated as summative score, spot exam score, and pre-post-test improvement. Results: The more frequently students used the online learning platform, the less they attended lecture classes (P = 0.035). High proactive online learners (High PO) who were irregular attendees (IrA) had significantly lower summative scores (P = 0.026), spot exam scores (P = 0.012) and pre-post-test improvement (P = 0.036). Meanwhile, conditional attendees (CoA), who only attended classes with an attendance check, had significantly higher summative scores (P = 0.025) and spot exam scores (P = 0.001) if they were in the High PO group. Conclusions: Both the benefit and drawback edges of using an online learning platform were demonstrated in our research. Based on this double-edged sword effect, we believe that online learning is a valuable learning strategy, but students must carefully plan their study schedule to gain the “benefit edge” while avoiding the “drawback edge”.

Keywords: academic performance, assessment, attendance, online learning, preclinical medical students

Procedia PDF Downloads 139
152 Clustering and Modelling Electricity Conductors from 3D Point Clouds in Complex Real-World Environments

Authors: Rahul Paul, Peter Mctaggart, Luke Skinner

Abstract:

Maintaining public safety and network reliability are the core objectives of all electricity distributors globally. For many electricity distributors, managing vegetation clearances from their above-ground assets (poles and conductors) is the most important and costly risk mitigation control employed to meet these objectives. Light Detection and Ranging (LiDAR) is widely used by utilities as a cost-effective method to inspect their spatially distributed assets at scale, often captured using high-powered LiDAR scanners attached to fixed-wing or rotary aircraft. The resulting 3D point cloud model is used by these utilities to perform engineering-grade measurements that guide the prioritisation of vegetation cutting programs. Advances in computer vision and machine-learning approaches are increasingly applied to increase automation and reduce inspection costs and time; however, real-world LiDAR capture variables (e.g., aircraft speed and height) create complexity, noise, and missing data, reducing the effectiveness of these approaches. This paper proposes a method for identifying each conductor from LiDAR data via clustering methods that can precisely reconstruct conductors in complex real-world configurations in the presence of high levels of noise. It fits 3D catenary models to the captured LiDAR data points of individual clusters using a least-squares method, and an iterative learning process is used to identify potential conductor models between pole pairs. The proposed method identifies the optimum parameters of the catenary function and then fits the LiDAR points to reconstruct the conductors.
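A minimal sketch of fitting a catenary to conductor points by least squares, assuming a known horizontal offset and using a coarse parameter scan rather than the paper's iterative learning process (all values synthetic):

```python
import numpy as np

# Synthetic conductor points sampled from a catenary
# y = c * cosh((x - x0)/c) + y0, with x0 = 0 assumed known here.
rng = np.random.default_rng(0)
c_true, y0_true = 60.0, 12.0
x = np.linspace(-40, 40, 81)
y = c_true * np.cosh(x / c_true) + y0_true + rng.normal(0, 0.01, x.size)

# Coarse least-squares scan: for each candidate sag parameter c, the
# best vertical offset y0 has a closed form (the mean residual), so
# only c needs to be scanned.
best = None
for c in np.linspace(30, 120, 901):
    model = c * np.cosh(x / c)
    y0 = np.mean(y - model)
    sse = np.sum((y - model - y0) ** 2)
    if best is None or sse < best[0]:
        best = (sse, c, y0)
sse, c_hat, y0_hat = best
```

In practice the horizontal offset and span orientation would also be estimated, but the scan-plus-closed-form split shown here keeps the sketch short.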

Keywords: point cloud, LiDAR data, machine learning, computer vision, catenary curve, vegetation management, utility industry

Procedia PDF Downloads 82
151 Clinicians’ Experiences with IT Systems in a UK District General Hospital: A Qualitative Analysis

Authors: Sunny Deo, Eve Barnes, Peter Arnold-Smith

Abstract:

Introduction: Healthcare technology is a rapidly expanding field, with enthusiasts suggesting a revolution in the quality and efficiency of healthcare delivery based on the utilisation of better e-healthcare, including the move to paperless healthcare. The role and use of computers and programmes in healthcare have been increasing over the past 50 years. Despite this, there is no standardised method of assessing the quality of the hardware and software utilised by frontline healthcare workers. Methods and subjects: Based on standard Patient Reported Outcome Measures, a questionnaire was devised with the aim of providing quantitative and qualitative data on clinicians’ perspectives of their hospital’s Information Technology (IT). The survey was distributed via the institution’s intranet to all contracted doctors, and its qualitative results were analysed. Qualitative opinions were grouped as positive, neutral, or negative and further sub-grouped into speed/usability, software/hardware, integration, IT staffing, clinical risk, and wellbeing. Analysis was undertaken by doctor seniority and by specialty. Results: There were 196 responses, with 51% from senior doctors (consultant grades) and the rest from junior grades; the largest group of respondents (52%) came from medicine specialties. Differences in the proportions of principal and sub-groups were noted by seniority and specialty. Negative themes were by far the commonest opinion type, occurring in almost two-thirds of responses (63%), while positive comments occurred in fewer than 1 in 10 (8%). Conclusions: This survey confirms strongly negative attitudes to the current state of electronic documentation and IT in a large single-centre cohort of hospital-based frontline physicians after two decades of so-called progress towards a paperless healthcare system. Wider use of the survey would provide further insights and potentially optimise the focus of development and delivery to improve the quality and effectiveness of IT for clinicians and their patients.

Keywords: information technology, electronic patient records, digitisation, paperless healthcare

Procedia PDF Downloads 66
150 Implementation of A Treatment Escalation Plan During The Covid 19 Outbreak in Aneurin Bevan University Health Board

Authors: Peter Collett, Mike Pynn, Haseeb Ur Rahman

Abstract:

For the last few years, there has been a push across the UK towards implementing treatment escalation plans (TEP) for every patient admitted to hospital. A TEP is a paper form which is completed by a junior doctor and then countersigned by the consultant responsible for the patient's care. It is designed to address what level of care is appropriate for the patient in question at the point of entry to hospital, helping decide whether the patient would benefit from ward-based, high dependency or intensive care. TEPs are completed to ensure the patient's best interests are maintained and aim to facilitate difficult decisions which may be required at a later date. For example, when a frail patient with significant co-morbidities, unlikely to survive a pathology requiring an intensive care admission, is admitted to hospital, the decision can be made early that the patient would not benefit from an ICU admission. This decision can be reversed depending on the clinical course of the patient's admission. The TEP also promotes discussions with the patient regarding their wishes to receive certain levels of healthcare. This poster describes the steps taken in the Aneurin Bevan University Health Board (ABUHB) when implementing the TEP form. The team implementing the TEP form campaigned for its use to the board of directors. The directors were eager to hear of the experiences of other health boards that had implemented the TEP form. The team presented the data produced in a number of health boards and demonstrated the proposed form. Concern was raised regarding the legalities of the form and that it could upset patients and relatives if not explained properly. This delayed the effectuation of the TEP form, and further research and discussion were required. When COVID-19 reached the UK, the National Institute for Health and Clinical Excellence issued guidance stating that every patient admitted to hospital should be issued a TEP form. The TEP form was accelerated through the vetting process and was approved with immediate effect. The TEP form in ABUHB has now been in circulation for a month. An audit investigating its uptake and a survey gathering opinions have been conducted.

Keywords: acute medicine, clinical governance, intensive care, patient centered decision making

Procedia PDF Downloads 156
149 Lateralisation of Visual Function in Yellow-Eyed Mullet (Aldrichetta forsteri) and Its Role in Schooling Behaviour

Authors: Karen L. Middlemiss, Denham G. Cook, Peter Jaksons, Alistair Jerrett, William Davison

Abstract:

Lateralisation of cognitive function is a common phenomenon found throughout the animal kingdom. Strong biases in functional behaviours have evolved from asymmetrical brain hemispheres which differ in structure and/or cognitive function. In fish, lateralisation is involved in visually mediated behaviours such as schooling, predator avoidance, and foraging, and is considered to have a direct impact on species fitness. Currently, there is very little literature on the role of lateralisation in fish schools. The yellow-eyed mullet (Aldrichetta forsteri) is an estuarine and coastal species found commonly throughout temperate regions of Australia and New Zealand. This study sought to quantify visually mediated behaviours in yellow-eyed mullet to identify the significance of lateralisation and the factors which influence functional behaviours in schooling fish. Our approach was to conduct a series of tank-based experiments investigating: (a) individual and population-level lateralisation, (b) schooling behaviour, and (c) optic lobe anatomy. Yellow-eyed mullet showed individual variation in the direction and strength of lateralisation in juveniles, and trait-specific spatial positioning within the school was evidenced in strongly lateralised fish. In combination with observed differences in schooling behaviour, this suggests the possibility of ontogenetic plasticity in both behavioural lateralisation and optic lobe morphology in adults. These findings highlight the need for research into the genetic and environmental factors (epigenetics) which drive functional behaviours such as schooling, feeding and aggression. Improved knowledge of collective behaviour could have significant benefits for captive rearing programmes through improved culture techniques and will add to the limited body of knowledge on the complex ecophysiological interactions present in our inshore fisheries.

Keywords: cerebral asymmetry, fisheries, schooling, visual bias

Procedia PDF Downloads 196
148 Constraint-Based Computational Modelling of Bioenergetic Pathway Switching in Synaptic Mitochondria from Parkinson's Disease Patients

Authors: Diana C. El Assal, Fatima Monteiro, Caroline May, Peter Barbuti, Silvia Bolognin, Averina Nicolae, Hulda Haraldsdottir, Lemmer R. P. El Assal, Swagatika Sahoo, Longfei Mao, Jens Schwamborn, Rejko Kruger, Ines Thiele, Kathrin Marcus, Ronan M. T. Fleming

Abstract:

Degeneration of substantia nigra pars compacta dopaminergic neurons is one of the hallmarks of Parkinson's disease. These neurons have a highly complex axonal arborisation and a high energy demand, so any reduction in ATP synthesis could lead to an imbalance between supply and demand, thereby impeding normal neuronal bioenergetic requirements. Synaptic mitochondria exhibit increased vulnerability to dysfunction in Parkinson's disease. After biogenesis in, and transport from, the cell body, synaptic mitochondria become highly dependent upon oxidative phosphorylation. We applied a systems biochemistry approach to identify the metabolic pathways used by neuronal mitochondria for energy generation. The mitochondrial component of an existing manual reconstruction of human metabolism was extended with manual curation of the biochemical literature and specialised using omics data from Parkinson's disease patients and controls, to generate reconstructions of synaptic and somal mitochondrial metabolism. These reconstructions were converted into stoichiometrically and flux-consistent constraint-based computational models. These models predict that Parkinson's disease is accompanied by an increase in the rate of glycolysis and a decrease in the rate of oxidative phosphorylation within synaptic mitochondria. This is consistent with independent experimental reports of a compensatory switching of bioenergetic pathways in the putamen of post-mortem Parkinson's disease patients. Ongoing work, in the context of the SysMedPD project, is aimed at computational prediction of mitochondrial drug targets to slow the progression of neurodegeneration in the subset of Parkinson's disease patients with overt mitochondrial dysfunction.
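A toy illustration of the steady-state mass-balance constraint S·v = 0 that underlies constraint-based models such as those described above; the three-reaction network is hypothetical, not part of the actual reconstruction:

```python
import numpy as np

# Toy stoichiometric matrix S (rows: metabolites A, B; columns: reactions).
# R1: -> A, R2: A -> B, R3: B ->  (a linear pathway).
S = np.array([
    [1, -1,  0],   # A: produced by R1, consumed by R2
    [0,  1, -1],   # B: produced by R2, consumed by R3
])

# A flux vector v is feasible at steady state iff S @ v = 0,
# i.e. every metabolite is produced exactly as fast as it is consumed.
v = np.array([2.0, 2.0, 2.0])   # equal flux through the whole chain
residual = S @ v                # mass balance per metabolite
```

Constraint-based models of this kind additionally impose flux bounds and optimise an objective (e.g. ATP production) over all vectors satisfying this constraint.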

Keywords: bioenergetics, mitochondria, Parkinson's disease, systems biochemistry

Procedia PDF Downloads 276
147 Self-Esteem and Emotional Intelligence’s Association to Nutritional Status in Adolescent Schoolchildren in Chile

Authors: Peter Mc Coll, Alberto Caro, Chiara Gandolfo, Montserrat Labbe, Francisca Schnaidt, Michela Palazzi

Abstract:

Self-esteem and emotional intelligence are variables that are related to people's nutritional status. Self-esteem may be at low levels in people living with obesity, while emotional intelligence can play an important role in how people living with obesity cope. The objective of the study was to measure the association of self-esteem and emotional intelligence with nutritional status in an adolescent population. Methodology: A cross-sectional study was carried out with 179 adolescent schoolchildren between 13 and 19 years old from a public school. To evaluate nutritional status, weight and height were measured, and the body mass index and Z score were calculated. Self-esteem was evaluated using the Coopersmith Self-esteem Inventory adapted by Brinkmann and Segure. Emotional intelligence was measured using the Emotional Quotient Inventory: Short, by Bar-On, an adapted questionnaire translated into Spanish by López Zafra. For statistical analysis, Pearson's chi-square test, Pearson's correlation, and odds ratio calculation were used, with a significance level of 5%. Results: The study group was 71% female and 29% male. Nutritional status was distributed as eutrophic 41.9%, overweight 20.1%, and obesity 21.1%. In relation to self-esteem, 44.1% presented low and very low levels, with no differences by gender. Emotional intelligence was distributed as low 3.4%, medium 81%, and high 13.4%, with no differences according to gender. For the association between nutritional status (overweight and obesity) and low or very low self-esteem, an odds ratio of 2.5 (95% CI 1.12 – 5.59) was obtained, with a p-value = 0.02. The correlation analysis between the intrapersonal sub-dimension emotional intelligence scores and the Z score of nutritional status showed a negative correlation of r = -0.209, with a p-value < 0.005. The correlation between the emotional intelligence sub-dimension stress management and the Z score showed a positive correlation of r = 0.0161, with a p-value < 0.05. In conclusion, the group of adolescents studied had a high prevalence of overweight and obesity, a high prevalence of low self-esteem, and a high prevalence of average emotional intelligence. Overweight and obese adolescents were 2.5 times more likely to have low self-esteem. As overweight and obesity increase, self-esteem decreases, and the ability to manage stress increases.
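The reported odds ratio and confidence interval can be reproduced in form with the standard Woolf logit method; the 2×2 cell counts below are hypothetical, chosen only to yield an OR near the reported 2.5:

```python
import math

# Hypothetical 2x2 table: rows = overweight/obese vs eutrophic,
# columns = low/very-low self-esteem vs adequate self-esteem.
a, b = 40, 34   # overweight/obese: low self-esteem, adequate
c, d = 25, 53   # eutrophic:        low self-esteem, adequate

# Cross-product odds ratio and Woolf 95% CI on the log scale.
odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
```

An interval whose lower bound exceeds 1, as here, corresponds to a statistically significant positive association at the 5% level.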

Keywords: self-esteem, emotional intelligence, obesity, adolescent, nutritional status

Procedia PDF Downloads 41
146 Garnet-based Bilayer Hybrid Solid Electrolyte for High-Voltage Cathode Material Modified with Composite Interface Enabler on Lithium-Metal Batteries

Authors: Kumlachew Zelalem Walle, Chun-Chen Yang

Abstract:

Solid-state lithium metal batteries (SSLMBs) are considered promising candidates for next-generation energy storage devices due to their superior energy density and excellent safety. However, recent findings have shown that lithium (Li) dendrites in SSLMBs still exhibit a troubling ability to grow, so the development of SSLMBs must still confront the Li dendrite problem. In this work, an inorganic/organic composite coating material (g-C3N4/ZIF-8/PVDF) was used to modify the surface of the lithium metal anode (LMA). The modified LMA (denoted as g-C3N4@Li) was then assembled with lithium Nafion (LiNf) coated commercial NCM811 (LiNf@NCM811) using a bilayer hybrid solid electrolyte (Bi-HSE) that incorporated 20 wt.% (vs. polymer) LiNf-coated Li6.05Ga0.25La3Zr2O11.8F0.2 (LiNf@LG0.25LZOF) filler facing the positive electrode, with the other layer, containing 80 wt.% (vs. polymer) filler, facing the g-C3N4@Li. The garnet-type Li6.05Ga0.25La3Zr2O11.8F0.2 (LG0.25LZOF) solid electrolyte was prepared via a co-precipitation reaction in a Taylor flow reactor and modified using lithium Nafion (LiNf), a Li-ion conducting polymer. The Bi-HSE exhibited a high ionic conductivity of 6.8 × 10–4 S cm–1 at room temperature and a wide electrochemical window (0–5.0 V vs. Li/Li+). The coin cell was charged between 2.8 and 4.5 V at 0.2C and delivered an initial specific discharge capacity of 194.3 mAh g–1; after 100 cycles it maintained 81.8% of its initial capacity at room temperature. The presence of nano-sheet g-C3N4/ZIF-8/PVDF as a composite coating material on the LMA surface suppresses dendrite growth and enhances the compatibility as well as the interfacial contact between the anode and the electrolyte membrane. The g-C3N4@Li symmetrical cells incorporating this hybrid electrolyte possessed excellent interfacial stability over 1000 h at 0.1 mA cm–2 and a high critical current density (1 mA cm–2). Moreover, the in-situ formation of Li3N in the solid electrolyte interphase (SEI) layer, as shown by the XPS results, also improves the ionic conductivity and interface contact during the charge/discharge process. Therefore, these novel multi-layered fabrication strategies for hybrid/composite solid electrolyte membranes, together with modification of the LMA surface using mixed coating materials, have potential applications in the preparation of highly safe, high-voltage cathodes for SSLMBs.

Keywords: high-voltage cathodes, hybrid solid electrolytes, garnet, graphitic-carbon nitride (g-C3N4), ZIF-8 MOF

Procedia PDF Downloads 52
145 Documenting the 15th Century Prints with RTI

Authors: Peter Fornaro, Lothar Schmitt

Abstract:

The Digital Humanities Lab and the Institute of Art History at the University of Basel are collaborating in the SNSF research project ‘Digital Materiality’. Its goal is to develop and enhance existing methods for the digital reproduction of cultural heritage objects in order to support art historical research. One part of the project focuses on the visualization of a small, eye-catching group of early prints that are noteworthy for their subtle reliefs and glossy surfaces. Additionally, this group of objects, known as ‘paste prints’, is characterized by its fragile state of preservation. Because of the brittle substances that were used for their production, most paste prints are heavily damaged and thus very hard to examine. These specific material properties make a photographic reproduction extremely difficult. To obtain better results, we are working with Reflectance Transformation Imaging (RTI), a computational photographic method that is already used in archaeological and cultural heritage research. This technique allows documenting how three-dimensional surfaces respond to changing lighting situations. Our first results show that RTI can capture the material properties of paste prints and their current state of preservation more accurately than conventional photographs, although there are limitations with glossy surfaces because the mathematical models included in RTI are kept simple in order to keep the software robust and easy to use. To improve the method, we are currently developing tools for a more detailed analysis and simulation of the reflectance behavior. An enhanced analytical model for the representation and visualization of gloss will increase the significance of digital representations of cultural heritage objects. For collaborative efforts, we are working on a web-based viewer application for RTI images based on WebGL in order to make the acquired data accessible to a broader international research community. At the ICDH Conference, we would like to present unpublished results of our work and discuss the implications of our concept for art history, computational photography and heritage science.
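A minimal sketch of the simple per-pixel model underlying classic RTI, a polynomial texture map (PTM) that is biquadratic in the light direction, fitted by linear least squares on synthetic data (light directions and coefficients are illustrative):

```python
import numpy as np

# Per-pixel PTM model:
# L(lu, lv) = a0*lu^2 + a1*lv^2 + a2*lu*lv + a3*lu + a4*lv + a5
def design(lu, lv):
    return np.column_stack([lu**2, lv**2, lu * lv, lu, lv, np.ones_like(lu)])

rng = np.random.default_rng(1)
lu = rng.uniform(-0.9, 0.9, 60)   # light directions from the capture dome
lv = rng.uniform(-0.9, 0.9, 60)
coeffs_true = np.array([-0.3, -0.2, 0.1, 0.4, 0.2, 0.6])
L = design(lu, lv) @ coeffs_true  # observed intensities (noise-free here)

# Six coefficients per pixel, recovered by ordinary least squares.
coeffs_hat, *_ = np.linalg.lstsq(design(lu, lv), L, rcond=None)
```

The low-order polynomial is exactly the simplicity the abstract mentions: robust and compact, but unable to represent sharp specular gloss, which motivates the project's enhanced reflectance models.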

Keywords: art history, computational photography, paste prints, reflectance transformation imaging

Procedia PDF Downloads 263
144 Automated, Short Cycle Production of Polymer Composite Applications with Special Regards to the Complexity and Recyclability of Composite Elements

Authors: Peter Pomlenyi, Orsolya Semperger, Gergely Hegedus

Abstract:

The purpose of the project is to develop a complex composite component with a visible class ‘A’ surface. It will integrate several functions, including continuous fiber reinforcement, a foam core, injection molded ribs, and metal inserts. We are therefore going to produce a recyclable structural composite part from thermoplastic polymer in serial production with a short cycle time for automotive applications. Our design of the process line is determined by the principles of Industry 4.0. Accordingly, our goal is to map in detail the properties of the final product, including the mechanical properties, in order to replace metal elements used in the automotive industry, with special regard to the effect of each manufacturing process step on the aforementioned properties. The project runs for three years, from 1 December 2016 to 30 November 2019. There are four consortium members in the R&D project: evopro systems engineering Ltd., the Department of Polymer Engineering of the Budapest University of Technology and Economics, the Research Centre for Natural Sciences of the Hungarian Academy of Sciences, and eCon Engineering Ltd. One of the most important results is that we can obtain a short cycle time (down to 2-3 min) with the in-situ polymerization method, which is an innovation in the field of thermoplastic composite production. Owing to this method, our fully automated production line is able to manufacture complex thermoplastic composite parts and satisfies the short cycle time required by the automotive industry. In addition to the innovative technology, we are able to design and analyze complex composite parts with the finite element method and validate our results. We are continuously collecting information, knowledge and experience to improve our technology and obtain even more accurate results with respect to the quality and complexity of the composite parts, the cycle time of the production, and the design and analysis methods for the composite parts.

Keywords: T-RTM technology, composite, automotive, class A surface

Procedia PDF Downloads 128
143 The Effects of Physiological Stress on Global and Regional Repolarisation in the Human Heart in Vivo

Authors: May Khei Hu, Kevin Leong, Fu Siong Ng, Nicholas Peter

Abstract:

Introduction: Sympathetic stimulation has been recognised as a potent stimulus of arrhythmogenesis in various cardiac pathologies, possibly by augmenting the dispersion of repolarisation. The effects of sympathetic stimulation in healthy subjects, however, remain unclear. It is therefore crucial to first establish the effects of physiological stress on the dispersion of repolarisation in healthy subjects before understanding these effects in pathological cardiac conditions. We hypothesised that the activation-recovery interval (ARI; a surrogate of action potential duration) and the dispersion of repolarisation decrease on sympathetic stimulation. Methods: Eight patients aged 18-55 years with structurally normal hearts underwent a head-up tilt test (HUTT) and an exercise tolerance test (ETT) while wearing the electrocardiographic imaging (ECGi) vest. Patients later underwent a CT scan, and the epicardial potentials were reconstructed using the ECGi software. Activation and recovery times were determined from the acquired electrograms. ARI was calculated and then corrected using Bazett’s formula. Global and regional dispersion of repolarisation were determined from the standard deviation of the corrected ARI (ARIc). One-way analysis of variance (ANOVA) and the Wilcoxon test were used to evaluate statistical significance. Results: Global ARIc increased significantly [p<0.01] when patients were tilted upwards but decreased significantly after five minutes [p<0.01]. A subsequent post-hoc analysis revealed that the decrease in R-R was more substantial than the change in ARI, resulting in the observed increase in ARIc. Global ARIc decreased on peak exercise [p<0.01] but increased on recovery [p<0.01]. Global dispersion increased significantly on peak exercise [p<0.05], although there were no significant changes in regional dispersion. There were no significant changes in either global or regional dispersion during tilt.
Conclusion: ARIc decreases upon sympathetic stimulation in healthy subjects. Global dispersion of repolarisation increases upon exercise, although there were no changes in global or regional dispersion during orthostatic stress.
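The Bazett-style correction mentioned in the methods can be illustrated numerically; the ARI and R-R values below are hypothetical, chosen to show how a large R-R drop can raise ARIc even while the raw ARI shortens:

```python
import math

# Bazett correction, with intervals in seconds: ARIc = ARI / sqrt(RR)
def bazett(ari_s, rr_s):
    return ari_s / math.sqrt(rr_s)

# Supine baseline vs head-up tilt (hypothetical illustrative values).
aric_supine = bazett(0.250, 1.000)  # ARI 250 ms at RR 1000 ms
aric_tilt = bazett(0.240, 0.750)    # ARI falls slightly, RR falls more
```

Because the denominator shrinks faster than the numerator, the corrected interval increases on tilt, matching the post-hoc explanation in the results.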

Keywords: dispersion of repolarisation, sympathetic stimulation, Head-up tilt test (HUTT), Exercise tolerance test (ETT), Electrocardiographic imaging (ECGi)

Procedia PDF Downloads 180
142 Investigation of Xanthomonas euvesicatoria on Seed Germination and Seed to Seedling Transmission in Tomato

Authors: H. Mayton, X. Yan, A. G. Taylor

Abstract:

Infested tomato seeds were used to investigate the influence of Xanthomonas euvesicatoria on germination and seed to seedling transmission in controlled environment and greenhouse assays, in an effort to develop effective seed treatments and characterize seed borne transmission of bacterial leaf spot of tomato. Bacterial leaf spot of tomato, caused by four distinct Xanthomonas species, X. euvesicatoria, X. gardneri, X. perforans, and X. vesicatoria, is a serious disease worldwide. In the United States, disease prevention is expensive for commercial growers in warm, humid regions of the country, and crop losses can be devastating. In this study, four different infested tomato seed lots were extracted from tomato fruits infected with bacterial leaf spot from a field in New York State in 2017 that had been inoculated with X. euvesicatoria. In addition, vacuum infiltration at 61 kilopascals for 1, 5, 10, and 15 minutes and seed soaking for 5, 10, 15, and 30 minutes with different bacterial concentrations were used to artificially infest seed in the laboratory. For controlled environment assays, infested tomato seeds from the field and laboratory were placed on the moistened blue blotter in square plastic boxes (10 cm x 10 cm) and incubated at 20/30 ˚C with an 8/16 hour light cycle, respectively. Infested tomato seeds from the field and laboratory were also planted in small plastic trays in soil (peat-lite medium) and placed in the greenhouse with 24/18 ˚C day and night temperatures, respectively, with a 14-hour photoperiod. Seed germination was assessed after eight days in the laboratory and 14 days in the greenhouse. Polymerase chain reaction (PCR) using the hrpB7 primers (RST65 [5’- GTCGTCGTTACGGCAAGGTGGTG-3’] and RST69 [5’-TCGCCCAGCGTCATCAGGCCATC-3’]) was performed to confirm the presence or absence of the bacterial pathogen in seed lots collected from the field and in germinating seedlings in all experiments.
For infested seed lots from the field, germination ranged from 84-98% and was lowest (84%) in the seed lot with the highest level of bacterial infestation (55%). No adverse effect on germination was observed from artificially infested seeds for any bacterial concentration and method of infiltration when compared to a non-infested control. Germination in laboratory assays for artificially infested seeds ranged from 82-100%. In controlled environment assays, 2.5% of seedlings were PCR positive for the pathogen, and in the greenhouse assays, no infected seedlings were detected. From these experiments, X. euvesicatoria does not appear to adversely influence germination. The lowest rate of germination from field-collected seed may be due to contamination with multiple pathogens and saprophytic organisms, as no effect on germination was observed from artificial bacterial seed infestation in the laboratory. No evidence of systemic movement from seed to seedling was observed in the greenhouse assays; however, in the controlled environment assays, some seedlings were PCR positive. Additional experiments are underway with green fluorescent protein-expressing isolates to further characterize seed to seedling transmission of the bacterial leaf spot pathogen in tomato.

Keywords: bacterial leaf spot, seed germination, tomato, Xanthomonas euvesicatoria

Procedia PDF Downloads 118
141 Extraction and Quantification of Triclosan in Wastewater Samples Using Molecularly Imprinted Membrane Adsorbent

Authors: Siyabonga Aubrey Mhlongo, Linda Lunga Sibali, Phumlane Selby Mdluli, Peter Papoh Ndibewu, Kholofelo Clifford Malematja

Abstract:

This paper reports on the successful extraction and quantification of an antibacterial and antifungal agent present in some consumer products (triclosan: C₁₂H₇Cl₃O₂), generally found in wastewater or effluents, using a molecularly imprinted membrane adsorbent (MIMs), followed by quantification and removal on a high-performance liquid chromatography (HPLC) system. Triclosan is an antibacterial and antifungal agent present in consumer products like toothpaste, soaps, detergents, toys, and surgical cleaning treatments. The MIMs was fabricated using polyvinylidene fluoride (PVDF) polymer with selective micro-composite particles known as molecularly imprinted polymers (MIPs) via a phase inversion by immersion precipitation technique. This resulted in improved hydrophilicity and mechanical behaviour of the membranes. Wastewater samples were collected from the central effluent treatment plant of the Umbogintwini Industrial Complex (UIC, south coast of Durban, KwaZulu-Natal, South Africa) and pre-treated before analysis. Experimental parameters such as sample size, contact time, and stirring speed were optimised. The resultant MIMs had an adsorption efficiency of 97% for TCS, compared with 92% and 88% for the non-imprinted membranes (NIMs) and the bare membrane, respectively. The analytical method utilized in this study had a limit of detection (LoD) and limit of quantification (LoQ) of 0.22 and 0.71 µg/L in wastewater effluent, respectively. The percentage recovery for the effluent samples was 68%. The detection of TCS was monitored for 10 consecutive days; the highest TCS concentration detected in the treated wastewater was 55.0 μg/L on day 9, while the lowest was 6.0 μg/L. As the concentrations of analyte found in effluent water samples were not very diverse, this study suggests that MIMs could be a strong potential adsorbent for continued progress in membrane technology and environmental sciences, extending its capability to desalination.
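The reported detection limits can be illustrated with the common ICH-style convention LoD = 3.3σ/S and LoQ = 10σ/S, where σ is the standard deviation of the (blank or low-level) response and S the calibration-curve slope. The abstract does not state which convention the authors used, and the σ and slope below are hypothetical, chosen only to show the arithmetic.

```python
def detection_limits(sigma_blank, slope):
    """ICH-style limits: returns (LoD, LoQ) in concentration units, given
    the response standard deviation and the calibration-curve slope."""
    lod = 3.3 * sigma_blank / slope
    loq = 10.0 * sigma_blank / slope
    return lod, loq

# Hypothetical HPLC calibration figures for triclosan, for illustration only.
lod, loq = detection_limits(sigma_blank=0.8, slope=12.0)  # ug/L
print(round(lod, 2), round(loq, 2))
```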

Keywords: molecularly imprinted membrane, triclosan, phase inversion, wastewater

Procedia PDF Downloads 105
140 Assessing the Mass Concentration of Microplastics and Nanoplastics in Wastewater Treatment Plants by Pyrolysis Gas Chromatography−Mass Spectrometry

Authors: Yanghui Xu, Qin Ou, Xintu Wang, Feng Hou, Peng Li, Jan Peter van der Hoek, Gang Liu

Abstract:

The level and removal of microplastics (MPs) in wastewater treatment plants (WWTPs) have been well evaluated by particle number, while the mass concentration of MPs and especially nanoplastics (NPs) remains unclear. In this study, microfiltration, ultrafiltration and hydrogen peroxide digestion were used to extract MPs and NPs in different size ranges (0.01−1, 1−50, and 50−1000 μm) across the whole treatment schemes in two WWTPs. By identifying specific pyrolysis products, pyrolysis gas chromatography−mass spectrometry was used to quantify the mass concentrations of six selected polymer types (i.e., polymethyl methacrylate (PMMA), polypropylene (PP), polystyrene (PS), polyethylene (PE), polyethylene terephthalate (PET), and polyamide (PA)). The mass concentrations of total MPs and NPs decreased from 26.23 and 11.28 μg/L in the influent to 1.75 and 0.71 μg/L in the effluent, with removal rates of 93.3 and 93.7% in plants A and B, respectively. Among them, PP, PET and PE were the dominant polymer types in wastewater, while PMMA, PS and PA only accounted for a small part. The mass concentrations of NPs (0.01−1 μm) were much lower than those of MPs (>1 μm), accounting for 12.0−17.9 and 5.6−19.5% of the total MPs and NPs, respectively. Notably, the removal efficiency differed with polymer type and size range. The low-density MPs (e.g., PP and PE) had lower removal efficiency than high-density PET in both plants. Since smaller particles could pass the tertiary sand filter or membrane filter more easily, the removal efficiency of NPs was lower than that of MPs with larger particle size. Based on annual wastewater effluent discharge, it is estimated that about 0.321 and 0.052 tons of MPs and NPs were released into the river each year.
Overall, this study investigated the mass concentration of MPs and NPs over a wide size range of 0.01−1000 μm in wastewater, providing valuable information on the pollution level and distribution characteristics of MPs, and especially NPs, in WWTPs. However, there are limitations and uncertainties in the current study, especially regarding sample collection and MP/NP detection. The plastic items used (e.g., sampling buckets, ultrafiltration membranes, centrifugal tubes, and pipette tips) may introduce potential contamination. Additionally, the proposed method caused loss of MPs, and especially NPs, which can lead to their underestimation. Further studies are recommended to address these challenges in quantifying MPs/NPs in wastewater.
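The quoted removal rates follow directly from the influent and effluent mass concentrations; the annual-load figure additionally requires the plant’s yearly effluent volume, which the abstract does not give, so the flow value in the sketch below is hypothetical.

```python
def removal_pct(influent_ug_l, effluent_ug_l):
    """Mass-based removal efficiency of a treatment plant, in percent."""
    return 100.0 * (influent_ug_l - effluent_ug_l) / influent_ug_l

def annual_load_tons(effluent_ug_l, flow_l_per_year):
    """Annual discharge in metric tons (1 t = 1e12 ug)."""
    return effluent_ug_l * flow_l_per_year * 1e-12

print(round(removal_pct(26.23, 1.75), 1))   # plant A, values from the abstract
print(round(removal_pct(11.28, 0.71), 1))   # plant B, values from the abstract
print(annual_load_tons(1.75, 1.8e11))       # hypothetical 1.8e11 L/yr flow
```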

Keywords: microplastics, nanoplastics, mass concentration, WWTPs, Py-GC/MS

Procedia PDF Downloads 262
139 Assessment Literacy Levels of Mathematics Teachers to Implement Classroom Assessment in Ghanaian High Schools

Authors: Peter Akayuure

Abstract:

One key determinant of the quality of mathematics learning is the teacher’s ability to assess students adequately and effectively and to make assessment an integral part of instructional practice. If the mathematics teacher lacks the required literacy to perform classroom assessment roles, the true trajectory of learning success and attainment of curriculum expectations might be indeterminate. It is therefore important that educators and policymakers understand and seek ways to improve the literacy level of mathematics teachers to implement classroom assessments that would meet curriculum demands. This study employed a descriptive survey design to explore perceived levels of assessment literacy of mathematics teachers to implement classroom assessment within the school-based assessment framework in Ghana. A 25-item classroom assessment inventory on teachers’ assessment scenarios was adopted, modified, and administered to a purposive sample of 48 mathematics teachers from eleven Senior High Schools. Seven other items were included to further collect data on their self-efficacy towards assessment literacy. Data were analyzed using descriptive and bivariate correlation statistics. The results show that, on average, 48.6% of the mathematics teachers attained standard levels of assessment literacy. Specifically, 50.0% met standard one in choosing appropriate assessment methods, 68.3% reached standard two in developing appropriate assessment tasks, 36.6% reached standard three in administering, scoring, and interpreting assessment results, 58.3% reached standard four in making appropriate assessment decisions, 41.7% reached standard five in developing valid grading procedures, 45.8% reached standard six in communicating assessment results, and 36.2% reached standard seven in identifying unethical, illegal and inappropriate use of assessment results.
Participants rated their self-efficacy in performing assessments as high, yet the relationship between participants’ assessment literacy scores and self-efficacy scores was weak and statistically non-significant. The study recommends that institutions training mathematics teachers or providing professional development should accentuate assessment literacy development to ensure standard assessment practices and quality instruction in mathematics education at senior high schools.

Keywords: assessment literacy, mathematics teacher, senior high schools, Ghana

Procedia PDF Downloads 118
138 Physics-Based Earthquake Source Models for Seismic Engineering: Analysis and Validation for Dip-Slip Faults

Authors: Percy Galvez, Anatoly Petukhin, Paul Somerville, Ken Miyakoshi, Kojiro Irikura, Daniel Peter

Abstract:

Physics-based dynamic rupture modelling is necessary for estimating parameters such as rupture velocity and slip rate function that are important for ground motion simulation but poorly resolved by observations, e.g. by seismic source inversion. In order to generate a large number of physically self-consistent rupture models, whose rupture process is consistent with the spatio-temporal heterogeneity of past earthquakes, we use multicycle simulations under the heterogeneous rate-and-state (RS) friction law for a 45° dip-slip fault. We performed a parametrization study by fully dynamic rupture modeling, and then a set of spontaneous source models was generated in a large magnitude range (Mw > 7.0). In order to validate the rupture models, we compare the scaling relations versus seismic moment Mo of the modeled rupture area S, average slip Dave, and slip asperity area Sa with similar scaling relations from source inversions. Ground motions were also computed from our models. Their peak ground velocities (PGV) agree well with the GMPE values. We obtained good agreement of the permanent surface offset values with empirical relations. From the heterogeneous rupture models, we analyzed parameters that are critical for ground motion simulations, i.e. distributions of slip, slip rate, rupture initiation points, rupture velocities, and source time functions. We studied cross-correlations between them and with the friction weakening distance Dc, the only initial heterogeneity parameter in our modeling. The main findings are: (1) high slip-rate areas coincide with or are located on an outer edge of the large slip areas, (2) ruptures have a tendency to initiate in small Dc areas, and (3) high slip-rate areas correlate with areas of small Dc, large rupture velocity and short rise-time.

Keywords: earthquake dynamics, strong ground motion prediction, seismic engineering, source characterization

Procedia PDF Downloads 132
137 Interfacial Instability and Mixing Behavior between Two Liquid Layers Bounded in Finite Volumes

Authors: Lei Li, Ming M. Chai, Xiao X. Lu, Jia W. Wang

Abstract:

The mixing process of two liquid layers in a cylindrical container involves the upper liquid, of higher density, rushing into the lower liquid of lower density, the lower liquid rising into the upper liquid, and the two layers interacting with each other: forming vortices, spreading or dispersing into one another, and entraining or mixing with one another. It is a complex, rapidly evolving process comprising flow instability, turbulent mixing and other multiscale physical phenomena. In order to explore the mechanism of the process and make further investigations, experiments on the interfacial instability and mixing behavior between two liquid layers bounded in different volumes were carried out, applying the planar laser induced fluorescence (PLIF) and high speed camera (HSC) techniques. According to the results, the evolution of interfacial instability between immiscible liquids develops faster than the theoretical rate given by Rayleigh-Taylor instability (RTI) theory. It is reasonable to conjecture that mechanisms other than the RTI play key roles in the mixing process of the two liquid layers. The results also show that the invading velocity of the upper liquid into the lower liquid does not depend on the upper liquid's volume (height). Compared to the cases where the upper and lower containers are of identical diameter, when the lower liquid volume increases to a larger geometric space, the upper liquid spreads and expands into the lower liquid more quickly during the evolution of interfacial instability, indicating that the container wall has an important influence on the mixing process.
In the experiments on miscible liquid layers’ mixing, the diffusion time and pattern of the interfacial mixing likewise do not depend on the upper liquid's volume, and when the lower liquid volume increases to a larger geometric space, the action of the bounding wall on the falling and rising flows decreases, and the interfacial mixing effects also attenuate. Therefore, it is concluded that the weight of the upper, heavier liquid volume is not the cause of the fast interfacial instability evolution between the two liquid layers, and the bounding wall's action is limited to the unstable and mixing flow. Numerical simulations of the immiscible liquid layers’ interfacial instability flow using the VOF method show that the typical flow pattern agrees with the experiments; however, the calculated instability development is much slower than the experimental measurement. The numerical simulation of the miscible liquids’ mixing, which applied Fick’s diffusion law in the components’ transport equation, shows a much faster mixing rate at the liquids’ interface than the experiments at the initial stage. It can be presumed that the interfacial tension plays an important role in the interfacial instability between two liquid layers bounded in finite volume.
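The comparison with RTI theory, and the presumed role of interfacial tension, can be made concrete with the classical linear growth rate σ² = Agk − Tk³/(ρ₁+ρ₂), where A = (ρ_heavy−ρ_light)/(ρ_heavy+ρ_light) is the Atwood number, k the wavenumber and T the interfacial tension. The densities, wavelength and tension below are illustrative values, not measurements from these experiments.

```python
import math

def rt_growth_rate(rho_heavy, rho_light, k, g=9.81, tension=0.0):
    """Linear Rayleigh-Taylor growth rate (1/s) for a heavy fluid over a
    light one; interfacial tension stabilises short wavelengths via
    sigma^2 = A*g*k - T*k**3/(rho_heavy + rho_light)."""
    atwood = (rho_heavy - rho_light) / (rho_heavy + rho_light)
    sigma2 = atwood * g * k - tension * k**3 / (rho_heavy + rho_light)
    return math.sqrt(sigma2) if sigma2 > 0 else 0.0

k = 2 * math.pi / 0.01                      # 1 cm wavelength
fast = rt_growth_rate(1100.0, 1000.0, k)    # no interfacial tension
damped = rt_growth_rate(1100.0, 1000.0, k, tension=0.001)
print(fast > damped > 0.0)                  # tension slows the growth
```

With a large enough tension the bracketed term goes negative and the mode is stabilised entirely, which is one way tension can reshape the observed instability evolution.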

Keywords: interfacial instability and mixing, two liquid layers, Planar Laser Induced Fluorescence (PLIF), High Speed Camera (HSC), interfacial energy and tension, Cahn-Hilliard Navier-Stokes (CHNS) equations

Procedia PDF Downloads 226
136 Real-Time Monitoring of Complex Multiphase Behavior in a High Pressure and High Temperature Microfluidic Chip

Authors: Renée M. Ripken, Johannes G. E. Gardeniers, Séverine Le Gac

Abstract:

Controlling the multiphase behavior of aqueous biomass mixtures is essential when working in the biomass conversion industry. Here, the vapor/liquid equilibria (VLE) of ethylene glycol, glycerol, and xylitol were studied for temperatures between 25 and 200 °C and pressures of 1 to 10 bar. These experiments were performed in a microfluidic platform, which exhibits excellent heat transfer properties so that equilibrium is reached quickly. Firstly, the saturated vapor pressure as a function of temperature and the mole fraction of the substrate was calculated using AspenPlus with a Redlich-Kwong-Soave Boston-Mathias (RKS-BM) model. Secondly, we developed a high-pressure and high-temperature microfluidic set-up for experimental validation. Furthermore, we studied the multiphase flow pattern that occurs after the saturation temperature is reached. A glass-silicon microfluidic device containing a 0.4 or 0.2 m long meandering channel with a depth of 250 μm and a width of 250 or 500 μm was fabricated using standard microfabrication techniques. This device was placed in a dedicated chip-holder, which includes a ceramic heater on the silicon side. The temperature was controlled and monitored by three K-type thermocouples: two were located between the heater and the silicon substrate, one to set the temperature and one to measure it, and the third one was placed in a 300 μm wide and 450 μm deep groove on the glass side to determine the heat loss over the silicon. An adjustable back pressure regulator and a pressure meter were added to control and evaluate the pressure during the experiment. Aqueous biomass solutions (10 wt%) were pumped at a flow rate of 10 μL/min using a syringe pump, and the temperature was slowly increased until the theoretical saturation temperature for the pre-set pressure was reached. First, and surprisingly, a significant difference was observed between the theoretical saturation temperatures and the experimental results.
The experimental values were tens of degrees higher than the calculated ones and, in some cases, saturation could not be achieved. This discrepancy can be explained in different ways. Firstly, the pressure in the microchannel is locally higher due to both the thermal expansion of the liquid and the Laplace pressure that has to be overcome before a gas bubble can form. Secondly, superheating effects are likely to be present. Next, once saturation was reached, the flow pattern of the gas/liquid multiphase system was recorded. In our device, the point of nucleation can be controlled by taking advantage of the pressure drop across the channel and the accurate control of the temperature. Specifically, a higher temperature resulted in nucleation further upstream in the channel. As the void fraction increases downstream, the flow regime changes along the channel from bubbly flow to Taylor flow and later to annular flow. All three flow regimes were observed simultaneously. The findings of this study are key for the development and optimization of a microreactor for hydrogen production from biomass.
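The study’s theoretical saturation temperatures came from an RKS-BM model in AspenPlus; as a much simpler stand-in, the sketch below inverts a pure-water Antoine correlation by bisection to find the boiling temperature at a given pressure. The classic 1-100 °C Antoine constants for water are used and simply extrapolated at higher temperatures, so the numbers are illustrative only.

```python
def antoine_p_sat_mmHg(t_c):
    """Antoine equation for water (T in C, P in mmHg); the constants are
    the widely quoted 1-100 C set for water."""
    return 10 ** (8.07131 - 1730.63 / (t_c + 233.426))

def saturation_temp_c(p_bar, lo=1.0, hi=200.0):
    """Bisection on the (monotonic) Antoine curve: the temperature at
    which the vapor pressure equals p_bar."""
    target = p_bar * 750.062   # bar -> mmHg
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if antoine_p_sat_mmHg(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(round(saturation_temp_c(1.01325)))  # boiling point at 1 atm
```

A real VLE calculation for the aqueous polyol mixtures would need a mixture equation of state such as the RKS-BM model named in the abstract; this sketch only shows the pressure-temperature inversion step.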

Keywords: biomass conversion, high pressure and high temperature microfluidics, multiphase, phase diagrams, superheating

Procedia PDF Downloads 203
135 Opportunities and Challenges of Digital Diplomacy in the Public Diplomacy of the Islamic Republic of Iran

Authors: Somayeh Pashaee

Abstract:

The ever-increasing growth of the Internet and the development of information and communication technology have prompted politicians in different countries to use virtual networks as an efficient tool of foreign policy. The communication of governments and countries, even those farthest from each other, through electronic networks has brought vast changes in the ways of statecraft and governance. Importantly, diplomacy, which has always been based on information and communication, has been affected by the new prevailing conditions and new technologies more than other areas and has faced greater changes. The emergence of virtual space and the formation of new communication tools in the field of public diplomacy have led to a redefinition of the framework of diplomacy and politics in the international arena and the appearance of a new aspect of diplomacy called digital diplomacy. Digital diplomacy refers to the shift of relations from face-to-face, traditional modes to non-face-to-face, new modes, and its purpose is to address foreign policy issues using virtual space. Digital diplomacy, by affecting and changing diplomatic procedures, explains the role of technology in the visualization and implementation of diplomacy in different ways. The purpose of this paper is to investigate the position of digital diplomacy in the public diplomacy of the Islamic Republic of Iran. The paper tries to answer two questions in a descriptive-analytical way: considering the progress of communication and the role of virtual space in the service of diplomacy, what is the approach of the Islamic Republic of Iran towards digital diplomacy and the use of new ways of establishing foreign relations in public diplomacy? And what capacities and harms does the country face in using this new type of diplomacy?
In this paper, various theoretical concepts in the field of public diplomacy and modern diplomacy, including those of Geoff Berridge, Charles Kegley, Hans Tuch and Ronald Peter Barston, as well as the theoretical framework of Marcus Holmes on digital diplomacy, are used as a conceptual basis to support the analysis. The paper concludes that, in order to better achieve the country's political goals, especially in foreign policy, the approach of the Islamic Republic of Iran to public diplomacy with a focus on digital diplomacy should be strengthened and revised. Today, an exclusive emphasis on advancing diplomacy through traditional methods may weaken Iran's position in the public opinion of other countries.

Keywords: digital diplomacy, public diplomacy, Islamic Republic of Iran, foreign policy, opportunities and challenges

Procedia PDF Downloads 96
134 Phosphate Use Efficiency in Plants: A GWAS Approach to Identify the Pathways Involved

Authors: Azizah M. Nahari, Peter Doerner

Abstract:

Phosphate (Pi) is one of the essential macronutrients in plant growth and development, and it plays a central role in metabolic processes in plants, particularly photosynthesis and respiration. Limitation of crop productivity by Pi is widespread and is likely to increase in the future. Applications of Pi fertilizers have improved soil Pi fertility and crop production; however, they have also caused environmental damage. Therefore, in order to reduce dependence on unsustainable Pi fertilizers, a better understanding of phosphate use efficiency (PUE) is required for engineering nutrient-efficient crop plants. Enhanced Pi efficiency can be achieved by improved productivity per unit Pi taken up. We aim to identify, by association mapping, general features of the most important loci that contribute to increased PUE, allowing us to delineate the physiological pathways defining this trait in the model plant Arabidopsis. As PUE is in part determined by the efficiency of uptake, we designed a hydroponic system to avoid confounding effects due to differences in root system architecture leading to differences in Pi uptake. In this system, 18 parental lines and 217 lines of the MAGIC population (a Multiparent Advanced Generation Inter-Cross) were grown under high and low Pi availability conditions. The results revealed large variation in PUE among the parental lines, indicating that the MAGIC population is well suited to identifying PUE loci and pathways. Two of the 18 parental lines had the highest PUE under low Pi, while some lines responded strongly and increased PUE with increased Pi. Examination of the 217 MAGIC lines likewise revealed considerable variance in PUE. A general feature was the trend of most lines to exhibit higher PUE when grown in low Pi conditions. Association mapping is currently in progress, but initial observations indicate that a wide variety of physiological processes influence PUE in Arabidopsis.
The combination of hydroponic growth methods and genome-wide association mapping is a powerful tool to identify the physiological pathways underpinning complex quantitative traits in plants.

Keywords: hydroponic system growth, phosphate use efficiency (PUE), Genome-wide association mapping, MAGIC population

Procedia PDF Downloads 304
133 Autosomal Dominant Polycystic Kidney Patients May Be Predisposed to Various Cardiomyopathies

Authors: Fouad Chebib, Marie Hogan, Ziad El-Zoghby, Maria Irazabal, Sarah Senum, Christina Heyer, Charles Madsen, Emilie Cornec-Le Gall, Atta Behfar, Barbara Ehrlich, Peter Harris, Vicente Torres

Abstract:

Background: Mutations in PKD1 and PKD2, the genes encoding the proteins polycystin-1 (PC1) and polycystin-2 (PC2), cause autosomal dominant polycystic kidney disease (ADPKD). ADPKD is a systemic disease associated with several extrarenal manifestations. Animal models have suggested an important role for the polycystins in cardiovascular function. The aim of the current study is to evaluate the association of various cardiomyopathies in a large cohort of patients with ADPKD. Methods: Clinical data were retrieved from medical records for all patients with ADPKD and cardiomyopathies (n=159). Genetic analysis was performed on available DNA by direct sequencing. Results: Among the 58 patients included in this case series, 39 patients had idiopathic dilated cardiomyopathy (IDCM), 17 had hypertrophic obstructive cardiomyopathy (HOCM), and 2 had left ventricular noncompaction (LVNC). The mean age at cardiomyopathy diagnosis was 53.3, 59.9 and 53.5 years in IDCM, HOCM and LVNC patients, respectively. The median left ventricular ejection fraction at initial diagnosis of IDCM was 25%. Average basal septal thickness was 19.9 mm in patients with HOCM. Genetic data were available in 19, 8 and 2 cases of IDCM, HOCM, and LVNC, respectively. PKD1 mutations were detected in 47.4%, 62.5% and 100% of IDCM, HOCM and LVNC cases. PKD2 mutations were detected only in IDCM cases and were overrepresented (36.8%) relative to the expected frequency in ADPKD (~15%). The prevalence of IDCM, HOCM, and LVNC in our ADPKD clinical cohort was 1:17, 1:39 and 1:333, respectively. When compared to the general population, IDCM and HOCM were approximately 10-fold more prevalent in patients with ADPKD. Conclusions: In summary, we suggest that PKD1 or PKD2 mutations may predispose to idiopathic dilated or hypertrophic cardiomyopathy. There is a trend for patients with PKD2 mutations to develop the former and for patients with PKD1 mutations to develop the latter.
Predisposition to various cardiomyopathies may be another extrarenal manifestation of ADPKD.

Keywords: autosomal dominant polycystic kidney (ADPKD), polycystic kidney disease, cardiovascular, cardiomyopathy, idiopathic dilated cardiomyopathy, hypertrophic cardiomyopathy, left ventricular noncompaction

Procedia PDF Downloads 294
132 Embedded Visual Perception for Autonomous Agricultural Machines Using Lightweight Convolutional Neural Networks

Authors: René A. Sørensen, Søren Skovsen, Peter Christiansen, Henrik Karstoft

Abstract:

Autonomous agricultural machines act in stochastic surroundings and must therefore be able to perceive the surroundings in real time. This perception can be achieved using image sensors combined with advanced machine learning, in particular deep learning. Deep convolutional neural networks excel at labeling and perceiving color images, and since the cost of high-quality RGB cameras is low, the hardware cost of good perception depends heavily on memory and computation power. This paper investigates the possibility of designing lightweight convolutional neural networks for semantic segmentation (pixel-wise classification) with reduced hardware requirements, to allow for embedded usage in autonomous agricultural machines. Using compression techniques, a lightweight convolutional neural network is designed to perform real-time semantic segmentation on an embedded platform. The network is trained on two large datasets, ImageNet and Pascal Context, to recognize up to 400 individual classes. The 400 classes are remapped into agricultural superclasses (e.g. human, animal, sky, road, field, shelterbelt and obstacle), and the ability to provide accurate real-time perception of agricultural surroundings is studied. The network is applied to the case of autonomous grass mowing using the NVIDIA Tegra X1 embedded platform. Feeding case-specific images to the network results in a fully segmented map of the superclasses in the image. As the network is still being designed and optimized, only a qualitative analysis of the method is complete at the abstract submission deadline. Following this deadline, the finalized design is quantitatively evaluated on 20 annotated grass mowing images. Lightweight convolutional neural networks for semantic segmentation can be implemented on an embedded platform and show competitive performance with regard to accuracy and speed.
It is feasible to provide cost-efficient perceptive capabilities related to semantic segmentation for autonomous agricultural machines.
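The remapping of 400 fine-grained classes into agricultural superclasses amounts to a per-pixel table lookup. A minimal sketch, with invented class ids (the abstract does not list the actual mapping):

```python
# Superclasses named in the abstract; the fine-grained ids below are invented.
SUPERCLASS = {"human": 0, "animal": 1, "sky": 2, "road": 3,
              "field": 4, "shelterbelt": 5, "obstacle": 6}

lut = [SUPERCLASS["obstacle"]] * 400   # default: treat unknown classes as obstacles
lut[15] = SUPERCLASS["human"]          # e.g. a 'person' class
for i in range(120, 130):              # e.g. a block of livestock classes
    lut[i] = SUPERCLASS["animal"]
lut[200] = SUPERCLASS["sky"]

def remap(segmentation):
    """Per-pixel lookup from fine class ids to superclass ids."""
    return [[lut[c] for c in row] for row in segmentation]

print(remap([[15, 200], [125, 7]]))
```

In a deployed network the lookup would typically be vectorized array indexing on the argmax label map rather than a Python loop, which matters at real-time frame rates.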

Keywords: autonomous agricultural machines, deep learning, safety, visual perception

Procedia PDF Downloads 373
131 Bioactive Substances-Loaded Water-in-Oil/Oil-in-Water Emulsions for Dietary Supplementation in the Elderly

Authors: Agnieszka Markowska-Radomska, Ewa Dluska

Abstract:

Maintaining a diet dense in bioactive substances is important for the elderly, especially to prevent diseases and to support healthy ageing. Adequate intake of bioactive substances can reduce the risk of developing chronic diseases (e.g. cardiovascular disease, osteoporosis, neurodegenerative syndromes, diseases of the oral cavity, gastrointestinal (GI) disorders, diabetes, and cancer). This can be achieved by introducing comprehensive supplementation of the components necessary for the proper functioning of the ageing body. The paper proposes multiple emulsions of the W1/O/W2 (water-in-oil-in-water) type as carriers for effective co-encapsulation and co-delivery of bioactive substances in supplementation of the elderly. Multiple emulsions are complex structured systems ("drops in drops"). The functional structure of the W1/O/W2 emulsion enables (i) incorporation of one or more bioactive components (lipophilic and hydrophilic); (ii) enhancement of the stability and bioavailability of encapsulated substances; (iii) prevention of interactions between substances, as well as with the external environment; (iv) delivery to a specific location; and (v) release in a controlled manner. The multiple emulsions were prepared by a one-step method in the Couette-Taylor flow (CTF) contactor in a continuous manner, whereas in general a two-step emulsification process is used to obtain multiple emulsions. The paper proposes functionalizing the emulsions by introducing a pH-responsive biopolymer, carboxymethylcellulose sodium salt (CMC-Na), into the external phase, which made it possible to achieve release of components controlled by the pH of the gastrointestinal environment. The membrane phase of the emulsions was soybean oil. The W1/O/W2 emulsions were evaluated for their characteristics (drop size/drop size distribution, volume packing fraction), encapsulation efficiency and stability during storage (up to 30 days) at 4ºC and 25ºC.
Also, the in vitro multi-substance co-release process was investigated in a simulated gastrointestinal environment (different pH and composition of the release medium). Three groups of stable multiple emulsions were obtained: emulsions I with co-encapsulated vitamins B12, B6 and resveratrol; emulsions II with vitamin A and β-carotene; and emulsions III with vitamins C, E and D3. The substances were encapsulated in the appropriate emulsion phases depending on their solubility. For all emulsions, high encapsulation efficiency (over 95%) and a high volume packing fraction of internal droplets (0.54-0.76) were reached. In addition, due to the presence of a polymer (CMC-Na) with adhesive properties, high encapsulation stability during emulsion storage was achieved. The co-release study of the encapsulated bioactive substances confirmed the possibility of modifying the release profiles. It was found that the release process can be controlled through the composition, structure and physicochemical parameters of the emulsions and the pH of the release medium. The results showed that the obtained multiple emulsions might be used as potential liquid complex carriers for controlled/modified/site-specific co-delivery of bioactive substances in dietary supplementation of the elderly.

Keywords: bioactive substance co-release, co-encapsulation, elderly supplementation, multiple emulsion

Procedia PDF Downloads 182
130 The Impacts of the Sit-Stand Workplace Intervention on Cardiometabolic Risk

Authors: Rebecca M. Dagger, Katy Hadgraft, Matthew Teggart, Peter Angell

Abstract:

Background: There is a growing body of evidence demonstrating the association between sedentary behaviour, cardiometabolic risk and all-cause mortality. Since full-time working adults spend approximately 8 hours per day in the workplace, interventions to reduce sedentary behaviour at work may alleviate some of the negative health outcomes associated with it. The aims of this pilot study were to assess the impact of using a sit-stand workstation on markers of cardiometabolic health in a cohort of desk workers. Methods: Twenty-eight participants were recruited and randomly assigned to a control group (n = 5 males, 9 females; mean age 37 ± 9.4 years) or an intervention group (n = 5 males, 9 females; mean age 42 ± 12.7 years). All participants attended the laboratory on two occasions, pre- and post-intervention. Following baseline measurements, the intervention participants had sit-stand workstations (Ergotron, USA) installed for a 10-week intervention period. The sit-stand workstations allow participants to stand or sit at their usual workstation, and participants were encouraged to use the desk in a standing position at regular intervals throughout the working day. The cardiometabolic risk markers assessed were body mass, body composition (using bioimpedance analysis; Tanita, Tokyo), fasting blood total cholesterol (TC), lipid profiles (HDL-C, LDL-C, TC:HDL-C ratio), triglycerides and fasting glucose (Cholestech LDX), resting systolic and diastolic blood pressure, and resting heart rate. ANCOVA controlling for baseline values was used to assess group differences in changes in risk markers between pre- and post-intervention. Results: The 10-week intervention was associated with significant reductions in some cardiometabolic risk factors.
There were significant group effects on change in body mass (F(1,25) = 5.915, p < 0.05), total body fat percentage (F(1,25) = 12.615, p < 0.01), total fat mass (F(1,25) = 6.954, p < 0.05), and systolic blood pressure (F(1,25) = 5.012, p < 0.05). There were no other significant group effects on changes in cardiometabolic risk markers. Conclusion: This pilot study highlights the importance of reducing sedentary behaviour in the workplace for the reduction of cardiometabolic risk markers. Further research is required to support these findings.
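The reported analysis, ANCOVA on each risk marker with the baseline value as a covariate, can be sketched as follows. The data here are simulated, since the study's individual-level data are not public, and the variable names and the assumed -1.5 kg intervention effect are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 14  # participants per group, as in the pilot

# Simulated body mass (kg): baseline, group indicator, follow-up.
pre = rng.normal(80.0, 10.0, 2 * n)
group = np.repeat([0.0, 1.0], n)  # 0 = control, 1 = intervention
post = pre - 1.5 * group + rng.normal(0.0, 1.0, 2 * n)

# ANCOVA via ordinary least squares: post ~ intercept + pre + group.
# The coefficient on the group indicator is the baseline-adjusted group
# difference, the quantity the reported F(1,25) statistics test.
X = np.column_stack([np.ones(2 * n), pre, group])
beta, *_ = np.linalg.lstsq(X, post, rcond=None)
print(f"baseline-adjusted group difference: {beta[2]:.2f} kg")
```

In the actual study, each marker (body mass, fat percentage, systolic blood pressure, etc.) would be fitted in turn as the outcome of such a model.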

Keywords: sedentary behaviour, cardiometabolic risk, evidence, risk markers

Procedia PDF Downloads 432
129 A Comparative Analysis of Innovation Maturity Models: Towards the Development of a Technology Management Maturity Model

Authors: Nikolett Deutsch, Éva Pintér, Péter Bagó, Miklós Hetényi

Abstract:

Strategic technology management has emerged and evolved in parallel with strategic management paradigms. It focuses on the opportunity for organizations, operating mainly in technology-intensive industries, to explore and exploit technological capabilities upon which competitive advantage can be built. As strategic technology management involves multiple functions within an organization, requires broad and diversified knowledge, and must be developed and implemented in line with business objectives to enable a firm's profitability and growth, excellence in strategic technology management provides unique opportunities for organizations in terms of building a successful future. Accordingly, a framework supporting the evaluation of the technological readiness level of management can significantly contribute to developing organizational competitiveness through a better understanding of strategic-level capabilities and deficiencies in operations. In the last decade, several innovation maturity assessment models have appeared and become established management tools that can serve as references for future practical approaches to be used by corporate leaders, strategists, and technology managers to understand and manage technological capabilities and capacities. The aim of this paper is to provide a comprehensive review of state-of-the-art innovation maturity frameworks, to investigate the critical lessons learned from their application, to identify the similarities and differences among the models, and to identify the main aspects and elements valid for the field and critical functions of technology management. To this end, a systematic literature review was carried out, considering the relevant papers and articles published in highly ranked international journals on the 27 most widely known innovation maturity models, drawn from four relevant digital sources.
Key findings suggest that despite the diversity of the given models, there is still room for improvement regarding the common understanding of innovation typologies, the full coverage of innovation capabilities, and the generalist approach to the validation and practical applicability of the structure and content of the models. Furthermore, the paper proposes an initial structure by considering the maturity assessment of the technological capacities and capabilities - i.e., technology identification, technology selection, technology acquisition, technology exploitation, and technology protection - covered by strategic technology management.

Keywords: innovation capabilities, innovation maturity models, technology audit, technology management, technology management maturity models

Procedia PDF Downloads 42
128 Using Scilab® as New Introductory Method in Numerical Calculations and Programming for Computational Fluid Dynamics (CFD)

Authors: Nicoly Coelho, Eduardo Vieira Vilas Boas, Paulo Orestes Formigoni

Abstract:

Faced with the remarkable developments in the various segments of modern engineering brought about by increasing technological development, professionals of all educational areas need to overcome the difficulties encountered by those who are starting their academic journey. Aiming to overcome these difficulties, this article provides an introduction to the basic study of numerical methods applied to fluid mechanics and thermodynamics, demonstrating modeling and simulation together with a detailed explanation of the fundamental numerical solution by the finite difference method using Scilab®, a free and easily accessible software package that can be used by any research center or university, in developed and developing countries alike. Computational Fluid Dynamics (CFD) is a necessary tool for engineers and professionals who study fluid mechanics; however, the teaching of this area of knowledge in undergraduate programs faces difficulties due to software costs and the degree of difficulty of the mathematical problems involved, so the subject is often treated only in postgraduate courses. This work aims to bring low-cost CFD into the teaching of Transport Phenomena at the undergraduate level by analyzing a small classic case of fundamental thermodynamics with the Scilab® program. The study starts from the basic theory that students must master: the partial differential equation governing the heat transfer problem; the discretization process, based on Taylor series expansion, that generates the system of algebraic equations; the convergence check of that system using the Sassenfeld criterion; and, finally, its solution by the Gauss-Seidel method.
In this work we demonstrate both simple problems solved manually and more complex problems that required computer implementation, for which we use a small algorithm of fewer than 200 lines in Scilab® for the heat transfer study of a rectangular plate heated on its four sides, with a different temperature on each side, producing a two-dimensional transport problem with a coloured graphic simulation. With the spread of computer technology, numerous programs have emerged that demand great programming skill from the researcher. Considering that this ability to program CFD is the main obstacle to be overcome, both by students and by researchers, we present in this article an approach based on programs with a less complex interface, thus reducing the difficulty of producing graphical modeling and simulation for CFD and extending programming experience to undergraduates.
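The scheme the abstract describes, five-point finite differences for the steady heat equation solved by Gauss-Seidel sweeps, can be sketched as follows. The paper's own implementation is in Scilab® and is not reproduced here; this is a Python analogue, and the grid size, boundary temperatures and tolerance are illustrative assumptions:

```python
import numpy as np

def solve_plate(nx=20, ny=20, t_top=100.0, t_bottom=0.0,
                t_left=50.0, t_right=25.0, tol=1e-4, max_iter=10_000):
    """Steady-state temperature of a rectangular plate, Gauss-Seidel.

    Central finite differences reduce the Laplace equation to: each
    interior node equals the average of its four neighbours. Gauss-Seidel
    reuses freshly updated values within each sweep.
    """
    T = np.zeros((ny, nx))
    T[0, :] = t_top       # fixed boundary temperatures on the four sides
    T[-1, :] = t_bottom
    T[:, 0] = t_left
    T[:, -1] = t_right
    for _ in range(max_iter):
        max_diff = 0.0
        for i in range(1, ny - 1):
            for j in range(1, nx - 1):
                new = 0.25 * (T[i-1, j] + T[i+1, j] + T[i, j-1] + T[i, j+1])
                max_diff = max(max_diff, abs(new - T[i, j]))
                T[i, j] = new
        if max_diff < tol:  # converged when the largest update is tiny
            break
    return T

T = solve_plate()
print(f"plate centre temperature: {T[10, 10]:.1f}")
```

The converged field can then be passed to any plotting routine (in Scilab®, for instance, a coloured surface plot) to produce the graphic simulation the abstract mentions.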

Keywords: numerical methods, finite difference method, heat transfer, Scilab

Procedia PDF Downloads 362
127 Systems Intelligence in Management (High Performing Organizations and People Score High in Systems Intelligence)

Authors: Raimo P. Hämäläinen, Juha Törmänen, Esa Saarinen

Abstract:

Systems thinking has been acknowledged as an important approach in the strategy and management literature ever since the seminal works of Ackoff in the 1970s and Senge in the 1990s. The early literature was very much focused on structures and organizational dynamics. Understanding systems is important, but making improvements also needs ways to understand human behavior in systems. Peter Senge's book The Fifth Discipline gave the inspiration for the development of the concept of Systems Intelligence (SI), which integrates personal mastery and systems thinking. SI refers to intelligent behavior in the context of complex systems involving interaction and feedback. It is a competence related to the skills needed in strategy and in the environment of modern industrial engineering and management, where people skills and systems play an increasingly important role. The eight factors of Systems Intelligence have been identified from extensive surveys; the factors relate to perceiving, attitude, thinking and acting. The personal self-evaluation test developed consists of 32 items and can also be applied in a peer-evaluation mode. The concept and the test extend to organizations too: one can talk about organizational systems intelligence. This paper reports the results of an extensive survey based on peer evaluation. The results show that systems intelligence correlates positively with professional performance. People in a managerial role score higher in SI than others. The SI score improves with age, but there is no gender difference. Top organizations score higher in all SI factors than lower-ranked ones. The SI tests can also be used as leadership and management development tools that support self-reflection and learning. Finding ways of enhancing learning in organizational development is important, and today gamification is a promising new approach. The items in the SI test have been used to develop an interactive card game following the Topaasia game approach.
The game is an easy way of engaging people in a process that helps participants see and approach problems in their organization, and it also helps individuals identify challenges in their own behavior and improve their SI.

Keywords: gamification, management competence, organizational learning, systems thinking

Procedia PDF Downloads 76
126 Sports Racism in Australia: A Fifty Year Study of Bigotry and the Culture of Silence, from Mexico City to Melbourne

Authors: Tasneem Chopra

Abstract:

The 1968 Summer Olympics will forever be remembered for the silent protest against racism by the American athletes Tommie Smith and John Carlos. Also standing on the medal podium was the Australian Peter Norman, whose silent solidarity as a white sportsman completes the powerful, evocative image of that night in Mexico City. In the 50 years since Norman's stance of solidarity with his American counterparts, Australian sport has traveled a wide arc of racism narratives, with athletes still experiencing episodes of bigotry, both on the pitch and elsewhere. Aboriginal athletes, like tennis champion Evonne Goolagong, have received plaudits for their achievements on both the national and international stage while simultaneously being subject to prejudice and even to questions as to their right to represent their country as full, acceptable citizens. Racism in Australia is directed toward Australian athletes of colour as well as foreign sportspeople who visit the country. The complex, mutating nature of racism in Australia is also informed by a culture of silence, in which fellow athletes stand mute in the face of their colleagues' experience of bigotry. This paper analyses the phenomenon of sports racism in Australia over the past fifty years, culminating in the recent case of Héritier Lumumba, the former Collingwood footballer, and his public allegations of racism experienced at the hands of teammates over his 10-year career. It examines the treatment and mistreatment of athletes because of their race and further assesses how such public perceptions either shape Australian culture or are themselves a manifestation of pre-existing pathologies of bigotry. Further, it examines the efficacy of anti-racism initiatives in responding to this hate.
This paper will analyse the growing influence of corporate and media entities in crafting the economics of Australian sport and assess the role of such factors in creating the narrative of racism in the nation, both as a sociological reality and as a marker of national identity. Finally, this paper will examine the political, social and economic forces that sustain the culture of silence around racism in Australian society.

Keywords: aboriginal, Australia, corporations, silence

Procedia PDF Downloads 158