Search results for: Fully Homomorphic Encryption Scheme
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3259


289 Outcomes-Based Qualification Design and Vocational Subject Literacies: How Compositional Fallacy Short-Changes School-Leavers’ Literacy Development

Authors: Rose Veitch

Abstract:

Learning outcomes-based qualifications have been heralded as the means to raise vocational education and training (VET) standards, meet the needs of the changing workforce, and establish equivalence with existing academic qualifications. Characterized by explicit, measurable performance statements and atomistically specified assessment criteria, the outcomes model has been adopted by many VET systems worldwide since its inception in the United Kingdom in the 1980s. Debate to date centers on how the outcomes model treats knowledge. Flaws have been identified in terms of the overemphasis on end-points, neglect of process and a failure to treat curricula coherently. However, much of this censure has evaluated the outcomes model from a theoretical perspective; to date, there has been scant empirical research to support these criticisms. Various issues therefore remain unaddressed. This study investigates how the outcomes model impacts the teaching of subject literacies. This is of particular concern for subjects on the academic-vocational boundary such as Business Studies, since many of these students progress to higher education in the United Kingdom. This study also explores the extent to which the outcomes model is compatible with borderline vocational subjects. To fully understand whether this qualification model is fit for purpose in the 16-18 year-old phase, it is necessary to investigate how teachers interpret their qualification specifications in terms of curriculum, pedagogy and assessment. Of particular concern is the nature of the interaction between the outcomes model and teachers’ understandings of their subject-procedural knowledge, and how this affects their capacity to embed literacy into their teaching. The present study is part of a broader doctoral research project that seeks to understand whether and how content-area, disciplinary literacy and genre approaches can be adapted to outcomes-based VET qualifications. 
This qualitative research investigates the ‘what’ and ‘how’ of literacy embedding from the perspective of in-service teacher development in the 16-18 phase of education. Using ethnographic approaches, it is based on fieldwork carried out in one Further Education college in the United Kingdom. Emergent findings suggest that the outcomes model is not fit for purpose in the context of borderline vocational subjects. It is argued that the outcomes model produces inferior qualifications due to compositional fallacy: the sum of a subject’s components does not add up to the whole. Findings indicate that procedural knowledge, largely unspecified by some outcomes-based qualifications, is where subject-literacies are situated, and that this often gets lost in ‘delivery’. It seems that the outcomes model provokes an atomistic treatment of knowledge amongst teachers, along with the privileging of propositional knowledge over procedural knowledge. In other words, outcomes-based VET is a hostile environment for subject-literacy embedding. It is hoped that this research will produce useful suggestions for how this problem can be ameliorated, and will provide an empirical basis for the potential reforms required to address these issues in vocational education.

Keywords: literacy, outcomes-based, qualification design, vocational education

Procedia PDF Downloads 15
288 Implementing the WHO Air Quality Guideline for PM2.5 Worldwide can Prevent Millions of Premature Deaths Per Year

Authors: Despina Giannadaki, Jos Lelieveld, Andrea Pozzer, John Evans

Abstract:

Outdoor air pollution by fine particles ranks among the top ten global health risk factors that can lead to premature mortality. Epidemiological cohort studies, mainly conducted in the United States and Europe, have shown that long-term exposure to PM2.5 (particles with an aerodynamic diameter of less than 2.5μm) is associated with increased mortality from cardiovascular disease, respiratory disease and lung cancer. Fine particulates can cause health impacts even at very low concentrations; previously, no concentration level had been defined below which health damage can be fully prevented. The World Health Organization ambient air quality guidelines suggest an annual mean PM2.5 concentration limit of 10μg/m3. Populations in large parts of the world, especially in East and Southeast Asia and in the Middle East, are exposed to levels of fine particulate pollution that far exceed the World Health Organization guidelines. The aim of this work is to evaluate the implementation of recent air quality standards for PM2.5 in the EU, the US and other countries worldwide, and to estimate what measures will be needed to substantially reduce premature mortality. We investigated premature mortality attributable to fine particulate matter (PM2.5) among adults ≥ 30 years and children < 5 years, applying a high-resolution global atmospheric chemistry model combined with epidemiological concentration-response functions. The latter are based on the methodology of the Global Burden of Disease for 2010, assuming a ‘safe’ annual mean PM2.5 threshold of 7.3μg/m3. We estimate global premature mortality by PM2.5 at 3.15 million/year in 2010. China is the leading country with about 1.33 million deaths, followed by India with 575 thousand and Pakistan with 105 thousand. For the European Union (EU) we estimate 173 thousand and for the United States (US) 52 thousand in 2010. 
Based on sensitivity calculations we tested the gains from PM2.5 control by applying the air quality guidelines (AQG) and standards of the World Health Organization (WHO), the EU, the US and other countries. To estimate potential reductions in mortality rates we take into consideration the deaths that cannot be avoided after the implementation of PM2.5 upper limits, due to the contribution of natural sources to total PM2.5 and therefore to mortality (mainly airborne desert dust). The annual mean EU limit of 25μg/m3 would reduce global premature mortality by 18%, while within the EU the effect is negligible, indicating that the standard is largely met and that stricter limits are needed. The new US standard of 12μg/m3 would reduce premature mortality by 46% worldwide, 4% in the US and 20% in the EU. Implementing the AQG by the WHO of 10μg/m3 would reduce global premature mortality by 54%, 76% in China and 59% in India. In the EU and US, the mortality would be reduced by 36% and 14%, respectively. Hence, following the WHO guideline will prevent 1.7 million premature deaths per year. Sensitivity calculations indicate that even small changes at the lower PM2.5 standards can have major impacts on global mortality rates.
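The headline estimate can be sanity-checked with simple proportional arithmetic, using only the global burden and the reduction percentages quoted above (a back-of-envelope sketch, not the model calculation itself):

```python
# Back-of-envelope check: prevented deaths = global burden x fractional reduction.
# All figures are taken from the abstract; this is illustrative arithmetic only.
global_premature_deaths = 3.15e6  # estimated global PM2.5 mortality, 2010

# Fractional reduction in global premature mortality if each standard were met
reduction_by_standard = {
    "EU limit (25 ug/m3)": 0.18,
    "US standard (12 ug/m3)": 0.46,
    "WHO AQG (10 ug/m3)": 0.54,
}

for standard, fraction in reduction_by_standard.items():
    prevented = global_premature_deaths * fraction
    print(f"{standard}: ~{prevented / 1e6:.2f} million deaths prevented per year")
```

The WHO line reproduces the 1.7 million figure stated in the abstract (0.54 × 3.15 million ≈ 1.70 million).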

Keywords: air quality guidelines, outdoor air pollution, particulate matter, premature mortality

Procedia PDF Downloads 310
287 Global Digital Peer-to-Peer (P2P) Lending Platform Empowering Rural India: Determinants of Funding

Authors: Ankur Mehra, M. V. Shivaani

Abstract:

With increasing digitization, the world is coming closer, not only in terms of information flows but also in terms of capital flows. Micro-finance institutions (MFIs) have leveraged this digital world through innovative digital social peer-to-peer (P2P) lending platforms, such as Kiva. These digital P2P platforms bring together micro-borrowers and lenders from across the world. The main objective of this study is to understand the funding preferences of social investors, primarily from developed countries (such as the US, UK and Australia), lending money to borrowers from rural India at zero interest rates through Kiva. A further objective is to increase awareness of such platforms among MFIs engaged in providing micro-loans to those in need. The sample comprises India-based micro-loan applications posted by various MFIs on the Kiva lending platform over the period September 2012 to March 2016. Out of 7,359 loans, 256 failed to get funded by social investors. On average, a micro-loan with 30 days to expiry gets fully funded in 7,593 minutes, or 5.27 days. 62% of the loans raised on Kiva are related to livelihood, 32.5% are for funding basic necessities and the remaining 5.5% are for funding education. 47% of the loan applications have more than one borrower, while currency exchange risk falls on the social lenders for 45% of the loans. Controlling for loan amount and loan tenure, the analyses suggest that loan applications with more than one borrower have a lower chance of getting funded than applications made by a sole borrower. Such group applications also take more time to get funded. Further, a loan application by a single woman not only has a higher chance of getting funded but also gets funded faster. 
The results also suggest that loan applications supported by an MFI with a religious affiliation not only have a lower chance of getting funded, but also take longer to get funded than loan applications posted by secular MFIs. The results do not support cross-border currency risk as a factor explaining loan funding. Finally, the analyses suggest that loans raised for the purposes of livelihood and education have a higher chance of getting funded, and get funded faster, than loans applied for purposes related to basic necessities such as clothing, housing, food, health, and personal use. The results are robust to controls for ‘MFI dummy’ and ‘year dummy’. The key implication from this study is that global social investors tend to develop an emotional connection with single women borrowers, and consequently such loans get funded faster. Hence, MFIs should look for alternative ways of funding loans whose purpose is to meet basic needs, while more loans related to livelihood and education should be raised via digital platforms.

Keywords: P2P lending, social investing, fintech, financial inclusion

Procedia PDF Downloads 144
286 Carbon Capture and Storage by Continuous Production of CO₂ Hydrates Using a Network Mixing Technology

Authors: João Costa, Francisco Albuquerque, Ricardo J. Santos, Madalena M. Dias, José Carlos B. Lopes, Marcelo Costa

Abstract:

Nowadays, it is well recognized that carbon dioxide emissions, together with other greenhouse gases, are responsible for the dramatic climate changes that have been occurring over the past decades. Gas hydrates are currently seen as a promising and disruptive set of materials that can be used as a basis for developing new technologies for CO₂ capture and storage (CCS). Their potential as a clean and safe pathway for CCS is tremendous, since they require only water and gas to be mixed under favorable temperatures and mildly elevated pressures. However, the hydrate formation process is highly exothermic; it releases about 2 MJ per kilogram of CO₂, and it only occurs in a narrow window of operational temperatures (0 - 10 °C) and pressures (15 to 40 bar). Efficient continuous hydrate production within this temperature window requires high heat transfer rates during mixing. Past technologies have often struggled to meet this requirement, with inadequate heat transfer rates resulting in low productivity or extended mixing/contact times. Consequently, more effective continuous hydrate production technologies are needed for industrial applications. In this work, a network mixing continuous production technology has been shown to be viable for producing CO₂ hydrates. The structured mixer used throughout this work consists of a network of unit cells comprising mixing chambers interconnected by transport channels. These mixing features result in enhanced heat and mass transfer rates and a high interfacial surface area. The mixer's capacity emerges from the fact that, under proper hydrodynamic conditions, the flow inside the mixing chambers becomes a fully chaotic, self-sustained oscillatory flow, inducing intense local laminar mixing. The device presents specific heat transfer rates ranging from 10⁷ to 10⁸ W⋅m⁻³⋅K⁻¹. 
A laboratory-scale pilot installation was built using a device capable of continuously capturing 1 kg⋅h⁻¹ of CO₂ in an aqueous slurry of up to 20% by mass. The strong mixing intensity has proven sufficient to enhance dissolution and initiate hydrate crystallization without the need for external seeding mechanisms, and to achieve CO₂ conversions of 99% at the device outlet. CO₂ dissolution experiments revealed that the overall liquid mass transfer coefficient, ranging from 1,000 to 12,000 h⁻¹, is orders of magnitude larger than in similar devices with the same purpose. The present technology has thus been shown to be capable of continuously producing CO₂ hydrates. Furthermore, the modular characteristics of the technology, where scalability is straightforward, underline the potential development of a modular hydrate-based CO₂ capture process for large-scale applications.
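The cooling requirement implied by these figures can be estimated directly. The sketch below uses the 2 MJ/kg formation enthalpy and the 1 kg/h capture rate from the text, and assumes a 1 K driving temperature difference (an assumption, not a reported value) to illustrate why such high volumetric heat-transfer rates matter:

```python
# Heat-removal duty implied by the figures in the abstract: hydrate formation
# releases ~2 MJ per kg of CO2, and the pilot device captures 1 kg/h.
heat_of_formation_j_per_kg = 2e6   # J per kg CO2 (from the abstract)
capture_rate_kg_per_h = 1.0        # kg/h (pilot device)

duty_w = heat_of_formation_j_per_kg * capture_rate_kg_per_h / 3600.0  # watts
print(f"Continuous cooling duty: ~{duty_w:.0f} W")

# With specific heat-transfer rates of 1e7-1e8 W/(m3.K) and an assumed 1 K
# driving temperature difference, the mixing volume needed to reject that heat:
for u_v in (1e7, 1e8):  # volumetric heat-transfer coefficient, W/(m3.K)
    volume_m3 = duty_w / (u_v * 1.0)
    print(f"U_v = {u_v:.0e} W/(m3.K): required volume ~ {volume_m3 * 1e6:.1f} cm3")
```

Under these assumptions the duty is only a few hundred watts, and the reported heat-transfer rates allow it to be rejected in a mixing volume of cubic centimetres, consistent with a compact, modular device.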

Keywords: network, mixing, hydrates, continuous process, carbon dioxide

Procedia PDF Downloads 52
285 Evaluation of Coupled CFD-FEA Simulation for Fire Determination

Authors: Daniel Martin Fellows, Sean P. Walton, Jennifer Thompson, Oubay Hassan, Ella Quigley, Kevin Tinkham

Abstract:

Fire performance is a crucial aspect to consider when designing cladding products, and testing this performance is extremely expensive. Appropriate use of numerical simulation of fire performance has the potential to reduce the total number of fire tests required when designing a product by eliminating poor-performing design ideas early in the design phase. Due to the complexity of fire and the large spectrum of failures it can cause, multi-disciplinary models are needed to capture the complex fire behavior and its structural effects on its surroundings. Working alongside Tata Steel U.K., the authors have focused on completing a coupled CFD-FEA simulation model suited to testing polyisocyanurate (PIR) based sandwich panel products, to gain confidence before costly experimental standards testing. The sandwich panels are part of a thermally insulating façade system intended primarily for large non-domestic buildings. The work presented in this paper compares two coupling methodologies against a replication of the physical experimental standards test LPS 1181-1, carried out by Tata Steel U.K. The two coupling methodologies considered in this research are one-way and two-way coupling. A one-way coupled analysis consists of importing thermal data from the CFD solver into the FEA solver. A two-way coupled analysis continuously imports the updated thermal data, reflecting the fire's behavior, into the FEA solver throughout the simulation; likewise, the mechanical changes are passed back to the CFD solver so that geometric changes are included in the solution. For the CFD calculations, the solver Fire Dynamics Simulator (FDS) has been chosen because its numerical scheme is adapted specifically to fire problems. Validation of FDS applicability has been achieved in past benchmark cases. 
In addition, the FEA solver ABAQUS has been chosen to model the structural response to the fire because its crushable foam plasticity model can accurately capture the compressibility of PIR foam. An open-source code, FDS-2-ABAQUS, is used to couple the two solvers, employing several Python modules to complete the process, including failure checks. The coupling methodologies are compared against the experimental data acquired from Tata Steel U.K. using several variables: gas temperatures, surface temperatures, and mechanical deformation of the panels. Conclusions are drawn, noting improvements to be made to the current open-source coupling code FDS-2-ABAQUS to make it more applicable to Tata Steel U.K. sandwich panel products. Future directions for reducing the computational cost of the simulation are also considered.
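The difference between the two methodologies can be sketched with stub solvers standing in for FDS and ABAQUS. Every function name and the toy thermal/mechanical model below are hypothetical illustrations of the data flow, not the FDS-2-ABAQUS interface:

```python
# Minimal sketch of one-way vs two-way CFD-FEA coupling, with stub solvers
# standing in for FDS (CFD) and ABAQUS (FEA). All names and the toy physics
# are hypothetical; only the exchange pattern is the point.

def run_cfd_step(t, geometry):
    """Stub CFD step: surface temperature grows with time, and (weakly) with
    panel deflection, mimicking geometry feedback into the fire field."""
    return {"surface_temp": 20.0 + 50.0 * t * (1.0 + 0.1 * geometry["deflection"])}

def run_fea_step(thermal_field, geometry):
    """Stub FEA step: accumulates deflection from the thermal load."""
    geometry["deflection"] += 1e-4 * thermal_field["surface_temp"]
    return geometry

def one_way_coupling(n_steps):
    # CFD runs to completion on the undeformed geometry; FEA then consumes
    # the stored thermal history with no feedback to the fire field.
    geometry = {"deflection": 0.0}
    thermal_history = [run_cfd_step(t, {"deflection": 0.0}) for t in range(n_steps)]
    for thermal in thermal_history:
        geometry = run_fea_step(thermal, geometry)
    return geometry

def two_way_coupling(n_steps):
    # Thermal and mechanical updates are exchanged every step, so geometric
    # changes feed back into the CFD solution as the simulation advances.
    geometry = {"deflection": 0.0}
    for t in range(n_steps):
        thermal = run_cfd_step(t, geometry)
        geometry = run_fea_step(thermal, geometry)
    return geometry

print(one_way_coupling(10), two_way_coupling(10))
```

In this toy model the two-way loop predicts larger deflection, because deformation amplifies the thermal load at every step; the one-way loop, by construction, cannot capture that feedback.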

Keywords: fire engineering, numerical coupling, sandwich panels, thermo fluids

Procedia PDF Downloads 90
284 Bioresorbable Medicament-Eluting Grommet Tube for Otitis Media with Effusion

Authors: Chee Wee Gan, Anthony Herr Cheun Ng, Yee Shan Wong, Subbu Venkatraman, Lynne Hsueh Yee Lim

Abstract:

Otitis media with effusion (OME) is the leading cause of hearing loss in children worldwide. Surgery to insert a grommet tube into the eardrum is usually indicated for OME unresponsive to antimicrobial therapy, and it is the most common surgery performed on children. However, current commercially available grommet tubes are non-bioresorbable and not drug-treated, and their duration of retention on the eardrum to ventilate the middle ear is unpredictable. Their functionality is impaired when clogged or chronically infected, requiring additional surgery to remove or reinsert the tubes. We envisaged that a novel, fully bioresorbable grommet tube with sustained antibiotic release technology could address these drawbacks. In this study, drug-loaded bioresorbable poly(L-lactide-co-ε-caprolactone) (PLC) copolymer grommet tubes were fabricated by a microinjection moulding technique. In vitro drug release and a degradation model of the PLC tubes were studied. Antibacterial properties were evaluated by incubating PLC tubes with P. aeruginosa broth. Surface morphology was analyzed using scanning electron microscopy. A preliminary animal study was conducted using guinea pigs as an in vivo model to evaluate PLC tubes with and without drug, with a commercial Mini Shah grommet tube as comparison. Our in vitro data showed sustained drug release over 3 months. All PLC tubes revealed exponential degradation profiles over time. Modeling predicted loss of tube functionality in water at approximately 14 weeks for PLC tubes with drug and 17 weeks for those without. Generally, the PLC tubes showed less bacterial adherence, attributed to their much smoother surfaces compared to the Mini Shah; antibiotic released from the PLC tubes further made bacterial adherence on the surface negligible. The tubes showed neither inflammation nor otorrhea at 18 weeks post-insertion in the eardrums of guinea pigs, but demonstrated an advanced degree of bioresorption. Histology confirmed that the new PLC tubes were biocompatible. 
Analyses of the PLC tubes in the eardrums showed bioresorption profiles close to our in vitro degradation models. The bioresorbable antibiotic-loaded grommet tubes showed good predictability of functionality. The smooth surface and sustained-release technology reduced the risk of tube infection, and a functional duration of 18 weeks allows a sufficient ventilation period to treat OME. Our ongoing studies include modifying the surface properties with a protein coating, optimizing the drug dosage in the tubes to enhance their performance, evaluating the functional outcome on hearing after full resorption of the grommet tube and healing of the eardrum, and developing an animal model with OME to further validate our in vitro models.
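As an illustration of how such a degradation model predicts loss of functionality, the sketch below assumes simple first-order (exponential) decay with hypothetical rate constants chosen only so the predictions land near the reported ~14 and ~17 week figures; it is not the fitted study model:

```python
import math

# Illustrative first-order degradation model: a tube property retained as
# exp(-k t) falls to a failure level. The rate constants and the 50% failure
# level are hypothetical, chosen to land near the abstract's ~14 weeks
# (drug-loaded) and ~17 weeks (drug-free); they are not fitted study data.

def weeks_to_failure(k_per_week, retained_fraction_at_failure=0.5):
    """Time for a property decaying as exp(-k t) to reach the failure level."""
    return -math.log(retained_fraction_at_failure) / k_per_week

k_with_drug = 0.0495     # 1/week (hypothetical)
k_without_drug = 0.0408  # 1/week (hypothetical)

print(f"With drug:    ~{weeks_to_failure(k_with_drug):.0f} weeks")
print(f"Without drug: ~{weeks_to_failure(k_without_drug):.0f} weeks")
```

The same closed form can be inverted to extract a rate constant from an observed failure time, which is how an in vitro profile can be checked against in vivo resorption.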

Keywords: bioresorbable polymer, drug release, grommet tube, guinea pigs, otitis media with effusion

Procedia PDF Downloads 451
283 Potential of Aerodynamic Feature on Monitoring Multilayer Rough Surfaces

Authors: Ibtissem Hosni, Lilia Bennaceur Farah, Saber Mohamed Naceur

Abstract:

In order to assess water availability in the soil, it is crucial to have information about the distributed soil moisture content; this parameter helps in understanding the effect of humidity on the exchanges between soil, plant cover and atmosphere, as well as the surface processes and the hydrological cycle. The aerodynamic roughness length, on the other hand, is a surface parameter that scales the vertical profile of the horizontal component of the wind speed and characterizes the surface's ability to absorb the momentum of the airflow. In numerous applications in surface hydrology and meteorology, the aerodynamic roughness length is an important parameter for estimating momentum, heat and mass exchange between the soil surface and the atmosphere. It is therefore important to consider the impact of atmospheric factors in general, and natural erosion in particular, on soil evolution, its characterization and the prediction of its physical parameters. The study of wind-induced movements over vegetated soil surfaces, whether spaced plants or continuous plant cover, is motivated by significant research efforts in agronomy and biology. A major known problem in this area is crop damage by wind, which is a growing field of research. Most models of the soil surface require information about the aerodynamic roughness length and its temporal and spatial variability. We have used a bi-dimensional multi-scale (2D MLS) roughness description in which the surface is considered as a superposition of a finite number of one-dimensional Gaussian processes, each with its own spatial scale, using the wavelet transform and the Mallat algorithm to describe natural surface roughness. We have introduced a multi-layer treatment of the humidity of the soil surface to take into account a volume component in the backscattered radar signal. 
As humidity increases, the dielectric constant of the soil-water mixture increases, and this change is detected by microwave sensors. Nevertheless, many existing models in the field of radar imagery cannot be applied directly to areas covered with vegetation because of the vegetation's backscattering. Thus, the radar response corresponds to the combined signature of the vegetation layer and the underlying soil surface. Therefore, the key issue in the numerical estimation of soil moisture is to separate the two contributions and calculate the scattering behaviors of both layers by defining the scattering of the vegetation and of the soil below. This paper presents a synergistic methodology for estimating roughness and soil moisture from C-band radar measurements. The methodology relies on a microwave/optical model that has been used to calculate the scattering behavior of the vegetation-covered area by defining the scattering of the vegetation and of the soil below.
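The superposition idea behind the 2D MLS description can be illustrated with a 1D sketch: a profile built as a sum of independent Gaussian processes with dyadic correlation lengths. This stands in for, and does not reproduce, the wavelet/Mallat construction used in the paper:

```python
import math
import random

# 1D illustration of a multi-scale roughness description: the profile is a
# superposition of independent Gaussian processes, one per spatial scale.
# Smoothed white noise with dyadic correlation lengths is a crude stand-in
# for the paper's 2D wavelet/Mallat construction.

def gaussian_process_1d(n, correlation_length, amplitude, rng):
    """One roughness scale: Gaussian noise smoothed over correlation_length."""
    noise = [rng.gauss(0.0, 1.0) for _ in range(n + correlation_length)]
    return [amplitude * sum(noise[i:i + correlation_length]) / math.sqrt(correlation_length)
            for i in range(n)]

def multiscale_surface(n=256, n_scales=4, seed=42):
    """Superpose n_scales processes with correlation lengths 2, 4, 8, ..."""
    rng = random.Random(seed)
    surface = [0.0] * n
    for s in range(n_scales):
        layer = gaussian_process_1d(n, 2 ** (s + 1), amplitude=1.0 / (s + 1), rng=rng)
        surface = [a + b for a, b in zip(surface, layer)]
    return surface

profile = multiscale_surface()
print(len(profile), min(profile), max(profile))
```

Each scale contributes roughness with its own correlation length, so the summed profile exhibits structure at all of them, which is the property the multi-scale description exploits.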

Keywords: aerodynamic, bi-dimensional, vegetation, synergistic

Procedia PDF Downloads 271
282 Artificial Neural Network Approach for GIS-Based Soil Macro-Nutrients Mapping

Authors: Shahrzad Zolfagharnassab, Abdul Rashid Mohamed Shariff, Siti Khairunniza Bejo

Abstract:

Conventional methods for nutrient soil mapping are based on laboratory tests of samples that are obtained from surveys. The time and cost involved in gathering and analyzing soil samples are the reasons that researchers use Predictive Soil Mapping (PSM). PSM can be defined as the development of a numerical or statistical model of the relationship among environmental variables and soil properties, which is then applied to a geographic database to create a predictive map. Kriging is a group of geostatistical techniques to spatially interpolate point values at an unobserved location from observations of values at nearby locations. The main problem with using kriging as an interpolator is that it is excessively data-dependent and requires a large number of closely spaced data points. Hence, there is a need to minimize the number of data points without sacrificing the accuracy of the results. In this paper, an Artificial Neural Network (ANN) scheme was used to predict macronutrient values at un-sampled points. ANN has become a popular tool for prediction as it eliminates certain difficulties in soil property prediction, such as non-linear relationships and non-normality. Back-propagation multilayer feed-forward network structures were used to predict nitrogen, phosphorus and potassium values in the soil of the study area. A limited number of samples were used in the training, validation and testing phases of the ANN (pattern recognition structures) to classify soil properties, and the trained network was used for prediction. The soil analysis results of samples collected from the soil survey of block C of Sawah Sempadan, Tanjung Karang rice irrigation project in Selangor, Malaysia, were used. Soil maps were produced by the Kriging method using 236 samples (or values) that were a combination of actual values (obtained from real samples) and virtual values (neural network predicted values). 
For each macronutrient element, three types of maps were generated, with 118 actual and 118 virtual values, 59 actual and 177 virtual values, and 30 actual and 206 virtual values, respectively. To evaluate the performance of the proposed method, for each macronutrient element a base map using all 236 actual samples and test maps using 118, 59 and 30 actual samples, respectively, were produced by the Kriging method. A set of parameters was defined to measure the similarity of the maps generated with the proposed method, termed the sample reduction method. The results show that the maps generated through the sample reduction method were more accurate than the corresponding test maps produced from the same, smaller numbers of real samples alone. For example, nitrogen maps produced from 118, 59 and 30 real samples have 78%, 62% and 41% similarity, respectively, with the base map (236 samples), and the sample reduction method increased these similarities to 87%, 77% and 71%, respectively. Hence, this method can reduce the number of real samples, substituting ANN-predicted samples, while achieving the specified level of accuracy.
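The sample-reduction workflow can be sketched as follows. An inverse-distance-weighted predictor stands in for the trained back-propagation network, and the coordinates and nutrient values are synthetic, not the Sawah Sempadan survey data:

```python
import math
import random

# Sketch of the sample-reduction workflow: predict nutrient values at
# un-sampled points, then pool actual + virtual values for map interpolation.
# An inverse-distance predictor stands in for the trained back-propagation
# network; the field, coordinates and nutrient values are synthetic.

def predict(point, samples, power=2.0):
    """Stand-in for the ANN: inverse-distance-weighted nutrient prediction."""
    num = den = 0.0
    for (x, y), value in samples:
        d = math.hypot(point[0] - x, point[1] - y)
        if d < 1e-9:
            return value
        w = d ** -power
        num += w * value
        den += w
    return num / den

rng = random.Random(0)

# 30 "actual" nitrogen samples on a synthetic 100 m x 100 m field whose
# nutrient value follows a smooth spatial trend.
actual = []
for _ in range(30):
    x, y = rng.random() * 100, rng.random() * 100
    actual.append(((x, y), 10 + 0.05 * x + 0.03 * y))

# Generate 206 "virtual" samples at new locations from the predictor.
virtual = []
for _ in range(206):
    p = (rng.random() * 100, rng.random() * 100)
    virtual.append((p, predict(p, actual)))

pooled = actual + virtual  # 236 values fed to the map interpolator (kriging)
print(len(pooled))
```

The pooled set of 236 values mirrors the paper's 30-actual/206-virtual configuration; in the study the pooled values are then interpolated by kriging and the resulting map compared against the 236-sample base map.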

Keywords: artificial neural network, kriging, macro nutrient, pattern recognition, precision farming, soil mapping

Procedia PDF Downloads 71
281 An Emergentist Defense of Incompatibility between Morally Significant Freedom and Causal Determinism

Authors: Lubos Rojka

Abstract:

The common perception of morally responsible behavior is that it presupposes freedom of choice, and that free decisions and actions are not determined by natural events, but by a person. In other words, the moral agent has the ability and the possibility of doing otherwise when making morally responsible decisions, and natural causal determinism cannot fully account for morally significant freedom. The incompatibility between a person’s morally significant freedom and causal determinism appears to be a natural position. Nevertheless, some of the most influential philosophical theories on moral responsibility are compatibilist or semi-compatibilist, and they exclude the requirement of alternative possibilities, which contradicts the claims of classical incompatibilism. The compatibilists often employ Frankfurt-style thought experiments to prove their theory. The goal of this paper is to examine the role of imaginary Frankfurt-style examples in compatibilist accounts. More specifically, the compatibilist accounts defended by John Martin Fischer and Michael McKenna will be inserted into the broader understanding of a person elaborated by Harry Frankfurt, Robert Kane and Walter Glannon. Deeper analysis reveals that the exclusion of alternative possibilities based on Frankfurt-style examples is problematic and misleading. A more comprehensive account of moral responsibility and morally significant (source) freedom requires higher-order complex theories of human will and consciousness, in which rational and self-creative abilities and a real possibility to choose otherwise, at least on some occasions during a lifetime, are necessary. Theoretical moral reasons and their logical relations seem to require a sort of higher-order agent-causal incompatibilism. The capacity for theoretical or abstract moral reasoning requires complex (strongly emergent) mental and conscious properties, among them an effective free will together with first- and second-order desires. 
Such a hierarchical theoretical model unifies reasons-responsiveness, mesh theory and emergentism. It is incompatible with physical causal determinism, because such determinism only allows non-systematic processes that may be hard to predict, but not complex (strongly) emergent systems. An agent’s effective will and conscious reflectivity are the starting point of a morally responsible action, which explains why a decision is 'up to the subject'. A free decision does not always have a complete causal history. This kind of emergentist source hyper-incompatibilism seems to be the best direction in the search for an adequate explanation of moral responsibility in the traditional (merit-based) sense. Physical causal determinism as a universal theory would exclude morally significant freedom and responsibility in the traditional sense, because it would exclude the emergence of, and supervenience by, the essential complex properties of human consciousness.

Keywords: consciousness, free will, determinism, emergence, moral responsibility

Procedia PDF Downloads 166
280 Cut-Off of CMV Cobas® Taqman® (CAP/CTM Roche®) for Introduction of Ganciclovir Pre-Emptive Therapy in Allogeneic Hematopoietic Stem Cell Transplant Recipients

Authors: B. B. S. Pereira, M. O. Souza, L. P. Zanetti, L. C. S. Oliveira, J. R. P. Moreno, M. P. Souza, V. R. Colturato, C. M. Machado

Abstract:

Background: The introduction of prophylactic or preemptive therapies has effectively decreased CMV mortality rates after hematopoietic stem cell transplantation (HSCT). CMV antigenemia (pp65) and quantitative PCR are methods currently approved for CMV surveillance in pre-emptive strategies. Commercial assays are preferred, as cut-off levels defined by in-house assays may vary among different protocols and in general show low reproducibility. Moreover, comparison of published data among different centers is only possible if international standards of quantification are included in the assays. Recently, the World Health Organization (WHO) established the first international standard for CMV detection. The real-time PCR COBAS AmpliPrep/COBAS TaqMan (CAP/CTM, Roche®) assay was developed using the WHO standard for CMV quantification. However, the cut-off for the introduction of antiviral therapy has not yet been determined. Methods: We conducted a retrospective study to determine: 1) the sensitivity and specificity of the new CMV CAP/CTM test in comparison with pp65 antigenemia for detecting episodes of CMV infection/reactivation, and 2) the cut-off viral load for the introduction of ganciclovir (GCV). pp65 antigenemia testing was performed, and the corresponding plasma samples were stored at -20°C for subsequent CMV detection by CAP/CTM. The tests were compared by kappa index. The appearance of positive antigenemia was considered the state variable for determining the cut-off CMV viral load by ROC curve analysis. Statistical analysis was performed using SPSS software version 19 (SPSS, Chicago, IL, USA). Results: Thirty-eight patients were included and followed from August 2014 through May 2015. The antigenemia test detected 53 episodes of CMV infection in 34 patients (89.5%), while CAP/CTM detected 37 episodes in 33 patients (86.8%). AG and PCR results were compared in 431 samples, and the kappa index was 30.9%. 
The median time to first AG detection was 42 days (range 28-140), while CAP/CTM detected CMV a median of 7 days earlier (34 days, range 7-110). The optimum cut-off value of CMV DNA for detecting positive antigenemia was 34.25 IU/mL, with 88.2% sensitivity, 100% specificity and an AUC of 0.91. This cut-off value is below the limit of detection and quantification of the equipment, which is 56 IU/mL. According to the CMV recurrence definition, 16 episodes of CMV recurrence were detected by antigenemia (47.1%) and 4 (12.1%) by CAP/CTM. The duration of viremia as detected by antigenemia was shorter (60.5% of episodes lasted ≤ 7 days) than by CAP/CTM (57.9% of episodes lasted 15 days or more). These data suggest that using antigenemia to define the duration of GCV therapy might prompt early interruption of the antiviral, which may favor CMV reactivation. The CAP/CTM PCR could possibly provide safer information concerning the duration of GCV therapy. As prolonged treatment may increase the risk of toxicity, this hypothesis should be confirmed in prospective trials. Conclusions: Even though the Roche CAP/CTM assay showed good qualitative correlation with the antigenemia technique, the fully automated CAP/CTM did not demonstrate increased sensitivity. A cut-off value below the limit of detection and quantification may result in delayed introduction of pre-emptive therapy.
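The ROC-based choice of a cut-off can be illustrated as below, selecting the threshold that maximizes Youden's J (sensitivity + specificity - 1), one common criterion; the abstract does not state which criterion was used in SPSS. The viral-load values are synthetic, arranged so the optimum reproduces the reported 34.25 IU/mL, 88.2% sensitivity and 100% specificity:

```python
# Sketch of choosing an assay cut-off from a ROC analysis, as done for the
# CMV viral load: pick the threshold maximizing Youden's J = sensitivity +
# specificity - 1. The IU/mL values below are synthetic, not study data.

def roc_cutoff(positives, negatives):
    """Return (best_threshold, sensitivity, specificity) by Youden's J."""
    best = (None, 0.0, 0.0, -1.0)
    for threshold in sorted(set(positives + negatives)):
        sens = sum(v >= threshold for v in positives) / len(positives)
        spec = sum(v < threshold for v in negatives) / len(negatives)
        j = sens + spec - 1.0
        if j > best[3]:
            best = (threshold, sens, spec, j)
    return best[:3]

# Synthetic viral loads (IU/mL): samples taken at positive-antigenemia
# episodes vs samples without positive antigenemia.
antigenemia_pos = [34.25, 40, 55, 60, 75, 90, 120, 150, 33, 48,
                   66, 80, 101, 34.5, 45, 52, 30]
antigenemia_neg = [5, 8, 10, 12, 15, 18, 20, 22, 25, 28, 30, 31, 33, 29, 27]

threshold, sens, spec = roc_cutoff(antigenemia_pos, antigenemia_neg)
print(f"cut-off = {threshold} IU/mL, sensitivity = {sens:.1%}, specificity = {spec:.1%}")
```

The same sweep over candidate thresholds is what a ROC analysis performs; the study's point that the optimum (34.25 IU/mL) sits below the assay's 56 IU/mL limit of detection is exactly the situation where such a threshold cannot be acted on in practice.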

Keywords: antigenemia, CMV COBAS/TAQMAN, cytomegalovirus, antiviral cut-off

Procedia PDF Downloads 192
279 Leveraging Remote Assessments and Central Raters to Optimize Data Quality in Rare Neurodevelopmental Disorders Clinical Trials

Authors: Pamela Ventola, Laurel Bales, Sara Florczyk

Abstract:

Background: Fully remote or hybrid administration of clinical outcome measures in rare neurodevelopmental disorders trials is increasing due to the ongoing pandemic and recognition that remote assessments reduce the burden on families. Many assessments in rare neurodevelopmental disorders trials are complex; however, remote/hybrid trials readily allow for the use of centralized raters to administer and score the scales. The use of centralized raters has many benefits, including reducing site burden; however, a specific impact on data quality has not yet been determined. Purpose: The current study has two aims: a) evaluate differences in data quality between administration of a standardized clinical interview completed by centralized raters compared to those completed by site raters and b) evaluate improvement in accuracy of scoring standardized developmental assessments when scored centrally compared to when scored by site raters. Methods: For aim 1, the Vineland-3, a widely used measure of adaptive functioning, was administered by site raters (n= 52) participating in one of four rare disease trials. The measure was also administered as part of two additional trials that utilized central raters (n=7). Each rater completed a comprehensive training program on the assessment. Following completion of the training, each clinician completed a Vineland-3 with a mock caregiver. Administrations were recorded and reviewed by a neuropsychologist for administration and scoring accuracy. Raters were able to certify for the trials after demonstrating an accurate administration of the scale. For site raters, 25% of each rater’s in-study administrations were reviewed by a neuropsychologist for accuracy of administration and scoring. For central raters, the first two administrations and every 10th administration were reviewed. 
Aim 2 evaluated the added benefit of centralized scoring on the accuracy of scoring of the Bayley-3, a comprehensive developmental assessment widely used in rare neurodevelopmental disorders trials. Bayley-3 administrations across four rare disease trials were centrally scored. For all administrations, the site rater who administered the Bayley-3 scored the scale, and a centralized rater reviewed the video recordings of the administrations and also scored the scales to confirm accuracy. Results: For aim 1, site raters completed 138 Vineland-3 administrations. Of the 138 administrations, 53 were reviewed by a neuropsychologist. Four of the administrations had errors that compromised the validity of the assessment. The central raters completed 180 Vineland-3 administrations; 38 were reviewed, and none had significant errors. For aim 2, 68 administrations of the Bayley-3 were reviewed and scored by both a site rater and a centralized rater. Of these administrations, 25 had scoring errors that were corrected by the central rater. Conclusion: In rare neurodevelopmental disorders trials, sample sizes are often small, so data quality is critical. The use of central raters inherently decreases site burden, but it also decreases rater variance, as illustrated by the small team of central raters (n=7) needed to conduct all of the assessments (n=180) in these trials compared to the number of site raters (n=53) required for even fewer assessments (n=138). In addition, the use of central raters dramatically improves the quality of scoring the assessments.

Keywords: neurodevelopmental disorders, clinical trials, rare disease, central raters, remote trials, decentralized trials

Procedia PDF Downloads 174
278 Enhancing Industrial Wastewater Treatment: Efficacy and Optimization of Ultrasound-Assisted Laccase Immobilized on Magnetic Fe₃O₄ Nanoparticles

Authors: K. Verma, V. S. Moholkar

Abstract:

In developed countries, water pollution caused by industrial discharge has emerged as a significant environmental concern over the past decades. However, despite ongoing efforts, a fully effective and sustainable remediation strategy has yet to be identified. This paper describes how enzymatic and sonochemical treatments have demonstrated great promise in degrading bio-refractory pollutants. In particular, a compelling area of interest lies in the combined technique of sono-enzymatic treatment, which has exhibited a synergistic enhancement effect surpassing that of the individual techniques. This study employed the covalent attachment method to immobilize laccase from Trametes versicolor onto amino-functionalized magnetic Fe₃O₄ nanoparticles. To comprehensively characterize the synthesized free nanoparticles and the laccase-immobilized nanoparticles, techniques such as X-ray diffraction (XRD), Fourier transform infrared spectroscopy (FT-IR), scanning electron microscopy (SEM), vibrating sample magnetometry (VSM), and Brunauer-Emmett-Teller (BET) surface area analysis were employed. The size of the immobilized Fe₃O₄@Laccase was found to be 60 nm, and the maximum laccase loading was 24 mg/g of nanoparticle. The effect of process parameters such as immobilized Fe₃O₄@Laccase dose, temperature, and pH on percentage chemical oxygen demand (COD) removal was investigated as the response. The statistical design pinpointed the optimum conditions (immobilized Fe₃O₄@Laccase dose = 1.46 g/L, pH = 4.5, and temperature = 66 °C), resulting in a remarkable 65.58% COD removal within 60 minutes. An even greater improvement (90.31% COD removal) was achieved with the ultrasound-assisted enzymatic reaction utilizing a 10% duty cycle.
The investigation of various kinetic models for free and immobilized laccase, such as the Haldane, Yano-Koga, and Michaelis-Menten models, showed that ultrasound application impacted the kinetic parameters Vmax and Km. Specifically, Vmax values for free and immobilized laccase were found to be 0.021 mg/L min and 0.045 mg/L min, respectively, while Km values were 147.2 mg/L for free laccase and 136.46 mg/L for immobilized laccase. The lower Km and higher Vmax for immobilized laccase indicate its enhanced affinity towards the substrate, likely due to ultrasound-induced alterations in the enzyme's conformation and increased exposure of active sites, leading to more efficient degradation. Furthermore, toxicity and liquid chromatography-mass spectrometry (LC-MS) analyses revealed that after the treatment process, the wastewater exhibited 70% less toxicity than before treatment, with over 25 compounds degraded by more than 75%. Finally, the immobilized laccase showed excellent recyclability, retaining 70% of its activity over six consecutive cycles. A straightforward manufacturing strategy and outstanding performance make the recyclable magnetic immobilized laccase (Fe₃O₄@Laccase) a promising option for various environmental applications, particularly in water pollution control and treatment.
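The kinetic comparison above follows the Michaelis-Menten form v = Vmax*[S]/(Km + [S]). A minimal sketch below evaluates the Vmax and Km values reported in the abstract at a few substrate concentrations; the concentrations themselves are invented for illustration.

```python
def michaelis_menten(s, vmax, km):
    """Michaelis-Menten reaction rate: v = Vmax * [S] / (Km + [S])."""
    return vmax * s / (km + s)

# Parameters reported in the abstract (Vmax in mg/L min, Km in mg/L).
FREE  = dict(vmax=0.021, km=147.2)
IMMOB = dict(vmax=0.045, km=136.46)

# Illustrative substrate concentrations in mg/L (not from the study).
for s in (50, 150, 500):
    v_free = michaelis_menten(s, **FREE)
    v_imm  = michaelis_menten(s, **IMMOB)
    print(f"[S]={s:4d} mg/L  free={v_free:.4f}  immobilized={v_imm:.4f}")
```

Note that at [S] = Km the rate is exactly Vmax/2, which is why a lower Km (half-saturation reached at a lower substrate concentration) is read as higher affinity.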

Keywords: kinetic, laccase enzyme, sonoenzymatic, ultrasound irradiation

Procedia PDF Downloads 68
277 Spatial Pattern of Farm Mechanization: A Micro Level Study of Western Trans-Ghaghara Plain, India

Authors: Zafar Tabrez, Nizamuddin Khan

Abstract:

Agriculture in India in the pre-green revolution period was mostly controlled by terrain, climate, and edaphic factors. But after the introduction of innovative factors and technological inputs, the green revolution occurred and the agricultural scene witnessed great change. In the development of India’s agriculture, speedy and extensive introduction of technological change is one of the crucial factors. The technological change consists of the adoption of farming techniques such as the use of fertilisers, pesticides and fungicides, improved varieties of seeds, modern agricultural implements, improved irrigation facilities, and contour bunding for the conservation of moisture and soil, which are developed through research and calculated to bring about diversification, increased production, and greater economic return to the farmers. The green revolution in India took place during the late 1960s, equipped with technological inputs like high-yielding variety seeds, assured irrigation, and modern machines and implements. Initially the revolution started in Punjab, Haryana, and western Uttar Pradesh. With the efforts of the government, agricultural planners, and policy makers, the modern technocratic agricultural development scheme was later also implemented in backward and marginal regions of the country. The agriculture sector occupies the centre stage of India’s social security and overall economic welfare. The country has attained self-sufficiency in food grain production and also has a sufficient buffer stock. India's first Prime Minister, Jawaharlal Nehru, said ‘everything else can wait but not agriculture’. There is still continuous change in technological inputs and cropping patterns. Keeping these points in view, the authors attempt to investigate extensively the mechanization of agriculture and the associated change, selecting the western Trans-Ghaghara plain as a case study, with the block as the unit of study.
The study area includes the districts of Gonda, Balrampur, Bahraich, and Shravasti, which incorporate 44 blocks. The study is based on secondary sources of data by block for the years 1997 and 2007. It may be observed that there is a wide range of variation and change in farm mechanization, i.e., in agricultural machinery such as wooden and iron ploughs, advanced harrows and cultivators, advanced threshing machines, sprayers, advanced sowing instruments, and tractors. It may further be noted that, due to the continuous decline in the size of land holdings and the outflow of people seeking the same nature of work or employment in non-agricultural sectors, the magnitude and direction of agricultural systems are affected in the study area, which is one of the marginalized regions of Uttar Pradesh, India.

Keywords: agriculture, technological inputs, farm mechanization, food production, cropping pattern

Procedia PDF Downloads 313
276 Elderly in Sub Saharan Africa

Authors: Obinna Benedict Duru

Abstract:

This study focuses on the elderly and the challenges that confront them. The elderly are that segment of our population who, by virtue of the aging process, have in most cases reached a stage where they are confronted with the challenges of economic dependency and social marginality. These challenges result from physical and biological decline, compounded by social myths and realities which portray the elderly as a dependent population whose members cannot and should not work, and who need social assistance that the younger population is obliged to provide. From the moment of birth to the moment of death, our bodies are constantly changing. We are all enmeshed in the process of growing old, a transition from youthfulness to elderliness. In youth-oriented modern societies like ours, we tend to attach positive importance and significance to the biological changes that occur early in life and define later physical changes in negative terms. Children growing up and young adults receive more attention, greater responsibilities, and more legal rights to reward them on their way. But few people are congratulated on getting old. We commiserate with people who are getting old and make jokes about their supposed physical, mental, and biological decline. Wrinkles, loss of weight, and loss of vitality are all part of the aging process. In almost all parts of the world, earlier research has shown that about fifty percent of the elderly who suffer from stroke, arthritis, senility, and other age-related diseases are the disengaged and neglected elderly. Rapid technological change renders the knowledge and skills of the elderly obsolete; education is geared toward the young, and generational competition for jobs leads to pressure on the elderly to retire. Control of initial resources is shifted to the middle-aged, and older workers are pushed into positions of economic dependency.
This study therefore, among other things, seeks to discover how some government policies have affected the elderly, particularly in Africa; to explore the prospects and possibilities of the elderly for a better life; and to compare advances in healthcare in advanced Western societies with practice in Sub-Saharan Africa. The hypotheses of this study include: that the elderly in Sub-Saharan Africa are more vulnerable than their counterparts in Europe and America; that the elderly are more prone to social isolation; and that the elderly are most affected by age-related sickness. With a survey method as the research design and a sample size of about 500 respondents, a probability sampling technique was used. Data, which were analyzed using chi-square tests and tables, were collected through primary and secondary sources. The findings include: that the elderly suffer the pains of old age especially when disengaged from work or social activity; that loss of income condemns the elderly to a vegetative existence; and that those who have no other means of re-integration usually view old age with regret and despair. It is therefore recommended, among other things, that social welfare schemes and a process of re-integration in old age be introduced for the non-pensionable elderly in Africa.

Keywords: elderly, social isolation, dependency, re-integration

Procedia PDF Downloads 337
275 Developing Dynamic Capabilities: The Case of Western Subsidiaries in Emerging Market

Authors: O. A. Adeyemi, M. O. Idris, W. A. Oke, O. T. Olorode, S. O. Alayande, A. E. Adeoye

Abstract:

The purpose of this paper is to investigate the process of capability building at the subsidiary level and the challenges to that process. The relevance of external factors for capability development has not been explicitly addressed in empirical studies, though internal factors, acting as enablers, have been studied more extensively. With reference to external factors, subsidiaries are actively influenced by specific characteristics of the host country, implying a need to become fully immersed in local culture and practices. Specifically, in MNCs, there has been a widespread trend in management practice to increase subsidiary autonomy, with subsidiary managers being encouraged to act entrepreneurially and to take advantage of host country specificity. As such, it could be proposed that: P1: The degree to which subsidiary management is connected to the host country will positively influence the capability development process. Dynamic capabilities reside to a large measure with the subsidiary management team, but are impacted by the organizational processes, systems, and structures that the MNC headquarters has designed to manage its business. At the subsidiary level, the weight of the subsidiary in the network, its initiative-taking, and its profile building increase the supportive attention of the HQ and are relevant to the success of the capability-building process. Therefore, our second proposition is that: P2: Subsidiary role and HQ support are relevant elements in capability development at the subsidiary level. Design/Methodology/Approach: This study will adopt a multiple case studies approach, because case study research is relevant when addressing issues without known empirical evidence or with little developed prior theory. The key definitions and literature sources directly connected with the operations of western subsidiaries in emerging markets, such as China, are well established.
A qualitative approach, i.e., case studies of three western subsidiaries, will be adopted. The companies have similar products, they have operations in China, and all of them are mature in their internationalization process. Interviews with key informants, annual reports, press releases, media materials, presentation materials for customers and stakeholders, and other company documents will be used as data sources. Findings: Western subsidiaries in emerging markets operate in a way substantially different from those in the West. What conditions initiate the outsourcing of operations? The paper will discuss and present two relevant propositions guiding that process. Practical Implications: MNC headquarters should be aware of the potential for capability development at the subsidiary level. This increased awareness could prompt headquarters to consider possible ways of encouraging such capability development and of leveraging these capabilities for better MNC headquarters and/or subsidiary performance. Originality/Value: The paper is expected to contribute to the theme of drivers of subsidiary performance with a focus on emerging markets. In particular, it will show how some external conditions could promote a capability-building process within subsidiaries.

Keywords: case studies, dynamic capability, emerging market, subsidiary

Procedia PDF Downloads 124
274 Trajectory Generation Procedure for Unmanned Aerial Vehicles

Authors: Amor Jnifene, Cedric Cocaud

Abstract:

One of the most constraining problems facing the development of autonomous vehicles is the limitations of current technologies. Guidance and navigation controllers need to be faster and more robust. Communication data links need to be more reliable and secure. For an Unmanned Aerial Vehicle (UAV) to be useful and fully autonomous, one important feature that needs to be an integral part of the navigation system is autonomous trajectory planning. The work discussed in this paper presents a method for on-line trajectory planning for UAVs. This method takes into account various constraints of different types, including specific vectors of approach close to target points, multiple objectives, and other constraints related to speed, altitude, and obstacle avoidance. The trajectory produced by the proposed method ensures a smooth transition between different segments, satisfies the minimum curvature imposed by the dynamics of the UAV, and finds the optimum velocity based on available atmospheric conditions. Given a set of objective points and waypoints, a skeleton of the trajectory is first constructed by linking all waypoints with straight segments in the order in which they are encountered along the path. Secondly, vectors of approach (VoA) are assigned to objective waypoints and their preceding transitional waypoint, if any. Thirdly, the straight segments are replaced by 3D curvilinear trajectories taking into account the aircraft dynamics. In summary, this work presents a method for on-line 3D trajectory generation (TG) for Unmanned Aerial Vehicles (UAVs). The method takes as inputs a series of waypoints and an optional vector of approach for each waypoint. Using a dynamic model based on the performance equations of fixed-wing aircraft, the TG computes a set of 3D parametric curves establishing a course between every pair of waypoints, and assembles these sets of curves to construct a complete trajectory.
The algorithm ensures geometric continuity at each connection point between two sets of curves. The geometry of the trajectory is optimized according to the dynamic characteristics of the aircraft such that the result translates into a series of dynamically feasible maneuvers.
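The geometric-continuity requirement at each connection point can be sketched with cubic Bezier segments: the positions (C0) and tangents (C1) of adjacent segments must agree at the joint. The abstract does not specify which parametric curve family the TG uses, so the cubic Bezier representation and the 3D control points below are illustrative assumptions.

```python
def bezier_point(ctrl, t):
    """Evaluate a cubic Bezier segment (four 3D control points) at parameter t."""
    w = [(1 - t) ** 3, 3 * t * (1 - t) ** 2, 3 * t ** 2 * (1 - t), t ** 3]
    return tuple(sum(wi * p[i] for wi, p in zip(w, ctrl)) for i in range(3))

def close(u, v, tol=1e-9):
    """Componentwise comparison with a small tolerance for floating point."""
    return all(abs(x - y) <= tol for x, y in zip(u, v))

def joins_c1(seg_a, seg_b):
    """Geometric continuity at the joint of two cubic segments.

    C0: the end point of A equals the start point of B.
    C1: the end tangent of A (P3 - P2) equals the start tangent of B (Q1 - Q0).
    """
    c0 = close(bezier_point(seg_a, 1.0), bezier_point(seg_b, 0.0))
    tan_a = tuple(seg_a[3][i] - seg_a[2][i] for i in range(3))
    tan_b = tuple(seg_b[1][i] - seg_b[0][i] for i in range(3))
    return c0 and close(tan_a, tan_b)

# Two 3D segments sharing a waypoint at (1, 0, 0.5); the second segment's
# first control point continues the first segment's end tangent.
a = [(0, 0, 0), (0.3, 0.2, 0.1), (0.8, 0.1, 0.4), (1, 0, 0.5)]
b = [(1, 0, 0.5), (1.2, -0.1, 0.6), (1.7, 0.3, 0.9), (2, 0.5, 1.0)]
print(joins_c1(a, b))  # True: the tangent direction carries across the joint
```

A trajectory assembler in this style would enforce the C1 condition whenever it stitches the per-waypoint-pair curves into the complete path.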

Keywords: trajectory planning, unmanned autonomous air vehicle, vector of approach, waypoints

Procedia PDF Downloads 409
273 Low-Temperature Poly-Si Nanowire Junctionless Thin Film Transistors with Nickel Silicide

Authors: Yu-Hsien Lin, Yu-Ru Lin, Yung-Chun Wu

Abstract:

This work demonstrates ultra-thin poly-Si (polycrystalline silicon) nanowire junctionless thin-film transistors (NW JL-TFTs) with nickel silicide contacts. For the nickel silicide film, a two-step annealing process is designed to form an ultra-thin, uniform, low-sheet-resistance (Rs) Ni silicide film. The NW JL-TFT with nickel silicide contacts exhibits good electrical properties, including a high on/off current ratio (>10⁷), a subthreshold slope of 186 mV/dec., and low parasitic resistance. In addition, this work also compares the electrical characteristics of NW JL-TFTs with nickel silicide and non-silicide contacts. Nickel silicide techniques are widely used for high-performance devices as devices scale down, owing to the source/drain sheet resistance issue. Therefore, the self-aligned silicide (salicide) technique is used to reduce the series resistance of the device. Nickel silicide has several advantages, including a low-temperature process, low silicon consumption, no bridging failures, smaller mechanical stress, and smaller contact resistance. The junctionless thin-film transistor (JL-TFT) is fabricated simply by heavily doping the channel and source/drain (S/D) regions simultaneously. Owing to this special doping profile, the JL-TFT has several advantages, such as a lower thermal budget, which allows easier integration with high-k/metal gates than conventional MOSFETs (Metal Oxide Semiconductor Field-Effect Transistors), a longer effective channel length than conventional MOSFETs, and the avoidance of complicated source/drain engineering. To solve the JL-TFT's turn-off problem, an ultra-thin body (UTB) structure is needed to achieve a fully depleted channel region in the off-state. On the other hand, the drive current (Iᴅ) declines as transistor features are scaled. Therefore, this work demonstrates ultra-thin poly-Si nanowire junctionless thin-film transistors with nickel silicide contacts.
This work investigates the low-temperature formation of a nickel silicide layer by physical vapor deposition (PVD) of a 15 nm Ni layer on the poly-Si substrate. Notably, a two-step annealing process is used to form an ultra-thin, uniform, low-sheet-resistance (Rs) Ni silicide film. The first annealing step promoted Ni diffusion through a thin interfacial amorphous layer, after which the unreacted metal was removed. The second annealing step lowered the sheet resistance and firmly merged the silicide phase. The ultra-thin poly-Si nanowire junctionless thin-film transistor (NW JL-TFT) with nickel silicide contacts is demonstrated, revealing a high on/off current ratio (>10⁷), a subthreshold slope of 186 mV/dec., and low parasitic resistance. In the silicide film analysis, the second annealing step was shown to yield lower sheet resistance and a firmly merged silicide phase. In short, the NW JL-TFT with nickel silicide contacts exhibits competitive short-channel behavior and improved drive current.

Keywords: poly-Si, nanowire, junctionless, thin-film transistors, nickel silicide

Procedia PDF Downloads 238
272 A Study of the Effect of the Flipped Classroom on Mixed Abilities Classes in Compulsory Secondary Education in Italy

Authors: Giacoma Pace

Abstract:

The research seeks to evaluate whether students with impairments can achieve enhanced academic progress by actively engaging in collaborative problem-solving activities with teachers and peers, in order to overcome obstacles rooted in socio-economic disparities. Furthermore, the research underscores the significance of fostering students' self-awareness regarding their learning process and encourages teachers to adopt a more interactive teaching approach. The research also posits that reducing conventional face-to-face lessons can motivate students to explore alternative learning methods, such as collaborative teamwork and peer education within the classroom. To address socio-cultural barriers, it is imperative to assess students' internet access and possession of technological devices, as these factors can contribute to a digital divide. The research features a case study of a Flipped Classroom learning unit administered to six third-year high school classes at a Scientific Lyceum, a Technical School, and a Vocational School in the city of Turin, Italy. Data cover the teachers and students involved in the case study, including some impaired students in each class: entry level, students’ performance and attitude before using the Flipped Classroom, level of motivation, level of family involvement, teachers’ attitude towards the Flipped Classroom, goals attained, the pros and cons of such activities, and technology availability. The selected schools were contacted, and meetings were held with the English teachers to gather information about their attitude towards and knowledge of the Flipped Classroom approach. Questionnaires were administered to teachers and IT staff. The information gathered was used to outline the profile of the subjects involved in the study and was further compared with the second step of the study, conducted with the classes of the selected schools. The learning unit is the same for all classes; its structure and content were decided together with the English teachers of the classes involved.
The pacing and content are matched in every lesson, and all the classes participate in the same labs, use the same materials and homework, and receive the same assessment through summative and formative testing. Each step follows a precise scheme in order to be as reliable as possible. The outcomes of the case study will be analysed statistically. The case study is accompanied by a review of the literature concerning EFL approaches and the Flipped Classroom. The document analysis method was employed, i.e., a qualitative research method in which printed and/or electronic documents containing information about the research subject are reviewed and evaluated with a systematic procedure. Articles in the Web of Science Core Collection, Education Resources Information Center (ERIC), Scopus, and Science Direct databases were searched in order to determine the documents to be examined (years considered: 2000-2022).

Keywords: flipped classroom, impaired, inclusivity, peer instruction

Procedia PDF Downloads 53
271 Quality of Life Responses of Students with Intellectual Disabilities Entering an Inclusive, Residential Post-Secondary Program

Authors: Mary A. Lindell

Abstract:

Adults with intellectual disabilities (ID) are increasingly attending postsecondary institutions, including inclusive residential programs at four-year universities. Legislation, national organizations, and researchers support developing postsecondary education (PSE) options for this historically underserved population. Simultaneously, researchers are assessing quality of life (QOL) indicators for people with ID. This study explores the quality of life characteristics of individuals with ID entering a two-year PSE program. A survey aligned with the PSE program was developed and administered to participants before they began their college program (in future studies, the same survey will be administered 6 months and 1 year after graduating). Employment, income, and housing are frequently cited QOL measures. People with disabilities, and especially people with ID, are more likely to experience unemployment and low wages than people without disabilities. PSE improves adult outcomes (e.g., employment, income, housing) for people with and without disabilities. Similarly, adults with ID who attend PSE are more likely to be employed than their peers who do not attend PSE; however, adults with ID are the least likely among their typical peers and other students with disabilities to attend PSE. There is increased attention to providing individuals with ID access to PSE, and more research is needed regarding the characteristics of students attending PSE. This study focuses on the participants of a fully residential two-year program for individuals with ID. Students earn an Applied Skills Certificate while focusing on five benchmarks: self-care, home care, relationships, academics, and employment. To create a QOL measure, the goals of the PSE program were identified, and possible assessment items that aligned with the five program goals were initially selected from the National Core Indicators (NCI) and the National Transition Longitudinal Survey 2 (NTLS2).
Program staff and advisory committee members offered input on potential item alignment with program goals and expected value to students with ID in the program. National experts in researching QOL outcomes of people with ID were consulted and concurred that the items selected would be useful in measuring the outcomes of postsecondary students with ID. The measure was piloted, modified, and administered to incoming students with ID. Research questions: (1) In what ways are students with ID entering a two-year PSE program similar to individuals with ID who complete the NCI and NTLS2 surveys? (2) In what ways are students with ID entering a two-year PSE program different from individuals with ID who completed the NCI and NTLS2 surveys? The process of developing a QOL measure specific to a PSE program for individuals with ID revealed that many of the items in comprehensive national QOL measures are not relevant to stakeholders of this two-year residential inclusive PSE program. Specific responses of students with ID entering an inclusive PSE program will be presented, as well as a comparison to similar items on national QOL measures. This study explores the characteristics of students with ID entering a residential, inclusive PSE program. This information is valuable for researchers, educators, and policy makers as PSE programs become more accessible to individuals with ID.

Keywords: intellectual disabilities, inclusion, post-secondary education, quality of life

Procedia PDF Downloads 101
270 R&D Diffusion and Productivity in a Globalized World: Country Capabilities in an MRIO Framework

Authors: S. Jimenez, R. Duarte, J. Sanchez-Choliz, I. Villanua

Abstract:

There is a certain consensus in the economic literature about the factors that have influenced historical differences in growth rates between developed and developing countries. However, it is less clear which elements have marked the different growth paths of developed economies in recent decades. R&D has always been seen as one of the major sources of technological progress and of productivity growth, which is directly influenced by technological developments. Following recent literature, we can say that ‘innovation pushes the technological frontier forward’ as well as encouraging future innovation through the creation of externalities. In other words, productivity benefits from innovation are not fully appropriated by innovators; they also spread through the rest of the economy, encouraging absorptive capacities, which have become especially important in a context of increasing fragmentation of production. This paper aims to contribute to this literature in two ways: first, by exploring alternative indexes of R&D embodied in inter-country, inter-sectoral flows of goods and services (as an approximation to technology spillovers), capturing structural and technological characteristics of countries; and second, by analyzing the impact of direct and embodied R&D on the evolution of labor productivity at the country/sector level in recent decades. The traditional way of calculating embodied R&D through a multiregional input-output framework assumes that all countries have the same capability to absorb technology, but this is not the case: each country has different structural features and, as part of the literature claims, this implies different capabilities. In order to capture these differences, we propose to use a weight based on specialization structure indexes: one related to the specialization of countries in high-tech sectors, and the other based on a dispersion index.
We propose these two measures because, as far as we understand, country capabilities can be captured in different ways: through countries’ specialization in knowledge-intensive sectors, such as chemicals or electrical equipment, or through an intermediate technology effort spread across different sectors. Results suggest that country capabilities become increasingly important as trade openness increases. Moreover, if we focus on the country rankings, we can observe that with high-tech-weighted embodied R&D, countries such as China, Taiwan, and Germany rise into the top five despite not having the highest R&D expenditure intensities, showing the importance of country capabilities. Additionally, through a fixed-effects panel data model, we show that embodied R&D is indeed important in explaining labor productivity increases, even more so than direct R&D investments. This reflects that globalization is more important than has been acknowledged until now. It is true that almost all analyses in this field consider the effect of direct R&D intensity at t-1 on economic growth. Nevertheless, from our point of view, R&D evolves as a delayed flow, and some time is necessary before its effects on the economy become visible, as some authors have already claimed. Our estimations tend to corroborate this hypothesis, yielding a lag of 4-5 years.
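The core accounting step, R&D embodied in inter-country, inter-sectoral flows via the Leontief inverse, then scaled by a capability weight, can be sketched as follows. All figures here are hypothetical toy numbers for a 2-country, 2-sector system, chosen only to illustrate the mechanics, not the paper's actual dataset or weighting indexes.

```python
import numpy as np

# Hypothetical 2-country x 2-sector MRIO: technical coefficients matrix
# (rows/columns ordered country1-sector1, country1-sector2, country2-sector1, country2-sector2).
A = np.array([
    [0.10, 0.05, 0.02, 0.01],
    [0.04, 0.12, 0.01, 0.03],
    [0.02, 0.01, 0.15, 0.06],
    [0.01, 0.02, 0.05, 0.10],
])
# Hypothetical direct R&D intensity of each country-sector.
r = np.array([0.03, 0.08, 0.02, 0.05])

L = np.linalg.inv(np.eye(4) - A)  # Leontief inverse (I - A)^-1
embodied = r @ L                  # R&D embodied in downstream flows (direct + indirect)

# Capability weights from a specialization-structure index (hypothetical values;
# the paper proposes a high-tech specialization index and a dispersion index).
w = np.array([0.6, 1.4, 0.8, 1.2])
weighted_embodied = w * embodied
print(weighted_embodied)
```

Because the Leontief inverse adds the indirect rounds of production on top of the identity, the embodied intensity of every country-sector is at least its direct intensity; the weights then differentiate otherwise identical absorption assumptions.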

Keywords: economic growth, embodied, input-output, technology

Procedia PDF Downloads 124
269 Phage Display-Derived Vaccine Candidates for Control of Bovine Anaplasmosis

Authors: Itzel Amaro-Estrada, Eduardo Vergara-Rivera, Virginia Juarez-Flores, Mayra Cobaxin-Cardenas, Rosa Estela Quiroz, Jesus F. Preciado, Sergio Rodriguez-Camarillo

Abstract:

Bovine anaplasmosis is an infectious, tick-borne disease caused mainly by Anaplasma marginale; typical signs include anemia, fever, abortion, weight loss, decreased milk production, jaundice, and potentially death. Sick cattle can recover when antibiotics are administered; however, they usually remain carriers for life, posing a risk of infection to susceptible cattle. Anaplasma marginale is an obligate intracellular Gram-negative bacterium whose genetic composition is highly diverse among geographical isolates. There are currently no fully effective vaccines against bovine anaplasmosis, and the disease therefore continues to cause economic losses. Vaccine formulation is a hard task for several pathogens, including Anaplasma marginale, but peptide-based vaccines are a promising way to induce specific responses. Phage-displayed peptide libraries have proved to be one of the most powerful technologies for identifying specific ligands. Screening these peptide libraries is also a tool for studying interactions between proteins or peptides. Thus, it has allowed the identification of ligands recognized by polyclonal antisera, and it has been successful in identifying relevant epitopes in chronic diseases and toxicological conditions. The protective immune response to bovine anaplasmosis includes high levels of immunoglobulin subclass G2 (IgG2) but not subclass IgG1. Therefore, IgG2 from the serum of protected cattle can be useful to identify ligands, which can become part of an immunogen for cattle. In this work, the phage display random peptide library Ph.D.™-12 was incubated with IgG2 or blood sera of bovines immunized against A. marginale as targets. After three rounds of biopanning, several candidates were selected for additional analysis. Subsequently, their reactivity with sera immunized against A. marginale, as well as with sera positive and negative for A. marginale, was evaluated by immunoassays.
A collection of recognized peptides tested by ELISA was generated. More than three hundred phage-peptides were separately evaluated against the molecules used during panning. At least ten different peptide sequences were determined from their nucleotide composition. In this approach, three phage-peptides were selected for their binding and affinity properties. For the development of vaccines or diagnostic reagents, it is important to evaluate the immunogenic and antigenic properties of the peptides. The in vitro and in vivo immunogenic behavior of the peptides will be assayed both as synthetic peptides and as phage-peptides to determine their vaccine potential. Acknowledgment: This work was supported by grant SEP-CONACYT 252577 given to I. Amaro-Estrada.

Keywords: bovine anaplasmosis, peptides, phage display, veterinary vaccines

Procedia PDF Downloads 143
268 The Rite of Jihadification in ISIS Modified Video Games: Mass Deception and Dialectic of Religious Regression in Technological Progression

Authors: Venus Torabi

Abstract:

ISIS, the terrorist organization, modified two videogames, ARMA III and Grand Theft Auto 5 (2013), as means of online recruitment and ideological propaganda. The urge to study the mechanism at work, whether it has been successful or not, drives (digital) humanities experts to explore how codes of terror, Islamic ideology, and recruitment strategies are incorporated into the ludic mechanics of videogames. Another aspect of the significance lies in the fact that this latent problem has not been fully addressed in an interdisciplinary framework prior to this study, to the best of the researcher’s knowledge. Therefore, due to the complexity of the subject, the present paper draws on game studies and on philosophical and religious poles to form its methodology. As a contextualized epistemology of such exploitation of videogames, the core argument builds on the notion of “Culture Industry” proposed by Theodor W. Adorno and Max Horkheimer in Dialectic of Enlightenment (2002). This article posits that the ideological underpinnings of ISIS’s cause, corroborated by the action-bound mechanics of the videogames, are in line with adherence to Islamic eschatology as a furnishing ground and an excuse for exercising terrorism. It is an account of ISIS’s modification of the videogames, a tool of technological progression, to practice online radicalization. Dialectically, this practice is packed up in rhetoric for recognizing a religious myth (the advent of a savior) as a hallmark of regression. The study puts forth that ISIS’s wreaking havoc on the world, both in reality and within action videogames, negotiates a process of self-assertion in the players of such videogames (by assuming oneself a member of the terrorists) that leads to self-annihilation. It tries to unfold how ludic mod videogames are misused as tools of mass deception towards ethnic cleansing in reality, in line with the distorted eschatological myth.
To conclude, this study posits videogames to be a new avenue of mass deception in the framework of the Culture Industry. Yet, this emerges as a two-edged sword of mass deception in ISIS’s modification of videogames. It shows that ISIS is not only trying to hijack minds through online/ludic recruitment; it potentially deceives Muslim communities, or those prone to radicalization, into believing that its terrorist practices are preparing the world for the advent of a religious savior based on Islamic eschatology. This is to claim that the harsh actions of the videogames are potentially seeding minds with terrorist propaganda and numbing them to violence. The real world becomes an extension of that harsh virtual environment in a ludic/actual continuum, an extension that contributes to the mass deception mechanism of the terrorists in a clandestine trend.

Keywords: culture industry, dialectic, ISIS, islamic eschatology, mass deception, video games

Procedia PDF Downloads 137
267 Performing Arts and Performance Art: Interspaces and Flexible Transitions

Authors: Helmi Vent

Abstract:

This four-year artistic research project has set the goal of exploring the adaptable transitions within the realms between the two genres. This paper singles out one research question from the entire project for its focus, namely how and under what circumstances such transitions between a reinterpretation and a new creation can take place during the performative process. The film documentation accompanying the project was produced at the Mozarteum University in Salzburg, Austria, as well as on diverse everyday stages at various locations. The model institution that hosted the project is the LIA – Lab Inter Arts, under the direction of Helmi Vent. LIA combines artistic research with performative applications. The project participants are students from various artistic fields of study. The film documentation forms a central platform for the entire project. It functions as an audiovisual record of performative origins and development processes, serves as the basis for analysis and evaluation, including the self-evaluation of the recorded material, and also provides illustrative and discussion material in relation to the topic of this paper. Regarding the “interspaces” and variable “transitions”: the performing arts in Western cultures generally orient themselves toward existing original compositions – most often in the interconnected fields of music, dance and theater – with the goal of reinterpreting and rehearsing a pre-existing score, choreographed work, libretto or script and presenting that piece to an audience. The essential tool in this reinterpretation process is generally the artistic ‘language’ performers learn over the course of their main studies. Thus, speaking is combined with singing, playing an instrument is combined with dancing or with pictorial or sculpturally formed works, in addition to many other variations.
If the performing arts were to rid themselves of their designations from time to time and initially follow the emerging, diffusely gliding transitions into the unknown, the artistic language the performer has learned would become a creative resource. The illustrative film excerpts depicting the realms between performing arts and performance art present insights into the ways the project participants embrace unknown and explorative processes, thus allowing new performative designs or concepts to be invented between the participants’ acquired cultural and artistic skills and their own creations – according to their own ideas and issues, sometimes with their direct involvement, fragmentary, provisional, left as a rough draft, or fully composed. All in all, it is an evolutionary process, and its key parameters cannot be distilled down to their essence. Rather, they stem from a subtle inner perception, from deep-seated emotions, imaginations, and non-discursive decisions, which ultimately result in an artistic statement rising to the visible and audible surface. Within these realms between performing arts and performance art and their extremely flexible transitions, exceptional opportunities can be found to grasp and realise art itself as a research process.

Keywords: art as research method, Lab Inter Arts ( LIA ), performing arts, performance art

Procedia PDF Downloads 272
266 Optimal Control of Generators and Series Compensators within Multi-Space-Time Frame

Authors: Qian Chen, Lin Xu, Ping Ju, Zhuoran Li, Yiping Yu, Yuqing Jin

Abstract:

The operation of the power grid is becoming more complex and difficult due to its rapid development towards high voltage, long distance, and large capacity. For instance, many large-scale wind farms have been connected to the power grid, and their fluctuation and randomness are very likely to affect the stability and safety of the grid. Fortunately, many new types of equipment based on power electronics have been applied to the power grid, such as the UPFC (Unified Power Flow Controller), TCSC (Thyristor Controlled Series Compensation), and STATCOM (Static Synchronous Compensator), which can help to deal with the problem above. Compared with traditional equipment such as generators, new controllable devices, represented by FACTS (Flexible AC Transmission System) devices, have more accurate control ability and respond faster, but they are too expensive to use widely. Therefore, on the basis of a comparison and analysis of the control characteristics of traditional equipment and new controllable equipment on both time and space scales, a coordinated optimizing control method within a multi-space-time frame is proposed in this paper to bring both kinds of advantages into play, improving both control ability and economic efficiency. Firstly, the coordination of different space scales of the grid is studied, focusing on the fluctuation caused by large-scale wind farms connected to the power grid. With generators, FSC (Fixed Series Compensation), and TCSC, the coordination method for a two-layer regional power grid versus its subgrid is studied in detail. The coordination control model is built, the corresponding scheme is proposed, and the conclusion is verified by simulation. The analysis shows that interface power flow can be controlled by generators and that the power flow of specific lines between the two-layer regions can be adjusted by FSC and TCSC.
The smaller the interface power flow adjusted by generators, the larger the control margin of the TCSC; on the other hand, the total generator consumption is then much higher. Secondly, the coordination of different time scales is studied to further balance the total generator consumption against the TCSC control margin, so that the minimum control cost can be acquired. The coordination method for two-layer ultra-short-term correction versus AGC (Automatic Generation Control) is studied with generators, FSC, and TCSC. The optimal control model is formulated, a genetic algorithm is selected to solve the problem, and the conclusion is verified by simulation. Finally, the aforementioned method within the multi-space-time scale is analyzed with practical cases and simulated on the PSASP (Power System Analysis Software Package) platform. Its correctness and effectiveness are verified by the simulation results. Moreover, this coordinated optimizing control method can contribute to the decrease of control cost and will provide a reference for subsequent studies in this field.
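A genetic algorithm for a coordinated control cost of this kind can be sketched minimally as below. The cost function, its weights, and the 1.0 p.u. flow target are purely illustrative stand-ins (the paper's actual model balances generator consumption against TCSC control margin); only the GA structure of selection, crossover-free reproduction, and Gaussian mutation is shown.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical control cost: x[0] = generator adjustment (p.u.), x[1] = TCSC
# compensation level. Quadratic penalties weight generator effort more heavily,
# and the last term tracks an illustrative 1.0 p.u. interface flow target.
def control_cost(x):
    gen, tcsc = x
    return 3.0 * gen**2 + 1.0 * tcsc**2 + 0.5 * (gen + tcsc - 1.0)**2

def genetic_algorithm(cost, bounds, pop_size=40, generations=100):
    lo, hi = bounds
    pop = rng.uniform(lo, hi, size=(pop_size, 2))
    for _ in range(generations):
        fitness = np.apply_along_axis(cost, 1, pop)
        order = np.argsort(fitness)
        parents = pop[order[: pop_size // 2]]          # truncation selection (elitist)
        kids = parents[rng.integers(0, len(parents), pop_size - len(parents))]
        kids = kids + rng.normal(0.0, 0.05, kids.shape)  # Gaussian mutation
        pop = np.clip(np.vstack([parents, kids]), lo, hi)
    return pop[np.argmin(np.apply_along_axis(cost, 1, pop))]

best = genetic_algorithm(control_cost, bounds=(0.0, 2.0))
print(best, control_cost(best))
```

Because the top half of each generation is carried over unchanged, the best cost is non-increasing; for this toy quadratic the analytic minimum is 0.30 at (0.1, 0.3), which the GA approaches closely.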

Keywords: FACTS, multi-space-time frame, optimal control, TCSC

Procedia PDF Downloads 267
265 Enhancing the Performance of Automatic Logistic Centers by Optimizing the Assignment of Material Flows to Workstations and Flow Racks

Authors: Sharon Hovav, Ilya Levner, Oren Nahum, Istvan Szabo

Abstract:

In modern large-scale logistic centers (e.g., big automated warehouses), complex logistic operations performed by human staff (pickers) need to be coordinated with the operations of automated facilities (robots, conveyors, cranes, lifts, flow racks, etc.). The efficiency of advanced logistic centers strongly depends on optimizing picking technologies in sync with the facility/product layout, as well as on the optimal distribution of material flows (products) in the system. The challenge is to develop a mathematical operations research (OR) tool that will optimize system cost-effectiveness. In this work, we propose a model that describes an automatic logistic center consisting of a set of workstations located at several galleries (floors), with each station containing a known number of flow racks. The requirements of each product and the working capacity of stations served by a given set of workers (pickers) are assumed to be predetermined. The goal of the model is to maximize system efficiency. The proposed model includes two echelons. The first is the setting of the (optimal) number of workstations needed to create the total processing/logistic system, subject to picker capacities. The second echelon deals with the assignment of products to workstations and flow racks, aimed at achieving maximal throughput of picked products over the entire system given picker capacities and budget constraints. The solutions to the problems at the two echelons interact to balance the overall load in the flow racks and maximize overall efficiency. We have developed an operations research model for each echelon. In the first echelon, the problem of calculating the optimal number of workstations is formulated as a non-standard bin-packing problem with capacity constraints for each bin.
The problem arising in the second echelon is presented as a constrained product-workstation-flow rack assignment problem with a non-standard min-max criterion, in which the inner maximum of the workload is calculated across all workstations in the center and the outer minimum is calculated across all possible product-workstation-flow rack assignments. The OR problems arising in each echelon are proved to be NP-hard. Consequently, we develop heuristic and approximation solution algorithms based on exploiting and improving local optima. The logistic center model considered in this work is highly dynamic and is recalculated periodically based on updated demand forecasts that reflect market trends, technological changes, seasonality, and the introduction of new items. The suggested two-echelon approach and the min-max balancing scheme are shown to work effectively on illustrative examples and real-life logistic data.
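As a rough sketch of the first-echelon idea, the classical first-fit-decreasing heuristic packs product demands into the fewest capacity-constrained workstations. The demand values and capacity below are hypothetical, and the paper's formulation is a non-standard variant; this shows only the basic bin-packing mechanics the echelon builds on.

```python
def first_fit_decreasing(demands, capacity):
    """Pack product workload demands into as few workstations (bins) as
    possible, each with a fixed picker capacity (simple FFD heuristic)."""
    bins = []  # each bin tracks remaining capacity and assigned demands
    for d in sorted(demands, reverse=True):
        for b in bins:
            if b["free"] >= d:       # place in the first station that fits
                b["free"] -= d
                b["items"].append(d)
                break
        else:                        # no station fits: open a new one
            bins.append({"free": capacity - d, "items": [d]})
    return bins

# Hypothetical workload demands (total 28) and station capacity 10.
stations = first_fit_decreasing([7, 5, 4, 4, 3, 2, 2, 1], capacity=10)
print(len(stations))  # → 3
```

Here the total demand of 28 units forces at least ⌈28/10⌉ = 3 stations, and FFD attains that lower bound; in general FFD is only an approximation, which is consistent with the NP-hardness noted above.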

Keywords: logistics center, product-workstation, assignment, maximum performance, load balancing, fast algorithm

Procedia PDF Downloads 228
264 Impact of Material Chemistry and Morphology on Attrition Behavior of Excipients during Blending

Authors: Sri Sharath Kulkarni, Pauline Janssen, Alberto Berardi, Bastiaan Dickhoff, Sander van Gessel

Abstract:

Blending is a common process in the production of pharmaceutical dosage forms, where high shear is used to obtain a homogeneous dosage. The shear required can lead to uncontrolled attrition of excipients and affect active pharmaceutical ingredients (APIs). This has an impact on the performance of the formulation, as it can alter the structure of the mixture. Therefore, it is important to understand the driving mechanisms of attrition. The aim of this study was to increase the fundamental understanding of the attrition behavior of excipients. Attrition behavior was evaluated using a high shear blender (Procept Form-8, Zele, Belgium). Twelve pure excipients were tested, with morphologies varying from crystalline (sieved) and granulated to spray-dried (round to fibrous). The materials included lactose, microcrystalline cellulose (MCC), di-calcium phosphate (DCP), and mannitol. The rotational speed of the blender was set at 1370 rpm to obtain the highest shear, with a Froude (Fr) number of 9. Varying blending times of 2-10 min were used. After blending, the excipients were analyzed for changes in particle size distribution (PSD), determined (n = 3) by dry laser diffraction (Helos/KR, Sympatec, Germany). Attrition was found to be a surface phenomenon which occurs in the first minutes of the high shear blending process; an increase of blending time beyond 2 min showed no further change in particle size distribution. Material chemistry was identified as a key driver for differences in attrition behavior between excipients. This is mainly related to the proneness to fragmentation, which is known to be higher for materials such as DCP and mannitol than for lactose and MCC. Secondly, morphology was identified as a driver of the degree of attrition. Granular products consisting of irregular surfaces showed the highest reduction in particle size, due to the weak solid bonds created between the primary particles during the granulation process.
Granular DCP and mannitol show a reduction of 80-90% in x10 (µm), compared to a 20-30% drop for granular lactose (monohydrate and anhydrous). Apart from the granular lactose, all the remaining morphologies of lactose (spray-dried round, sieved tomahawk, milled) show little change in particle size. Similar observations were made for spray-dried fibrous MCC. All these morphologies have few irregular or sharp surfaces and are thereby less prone to fragmentation. Therefore, products containing brittle materials such as mannitol and DCP are more prone to fragmentation when exposed to shear, and granular products with irregular surfaces show increased attrition, while spherical, crystalline, or fibrous morphologies are less affected during high shear blending. These changes in size will affect the functional attributes of the formulation, such as flow, API homogeneity, tableting, and the formation of dust. Hence, it is important for formulators to fully understand the excipients in order to make the right choices.
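The reported reductions are percentage drops in the x10 percentile of the PSD before versus after blending, which can be computed as follows. The before/after values used here are hypothetical, chosen only to land inside the reported 80-90% and 20-30% ranges.

```python
def x10_reduction(x10_before_um, x10_after_um):
    """Percentage drop in the x10 percentile of the PSD after blending."""
    return 100.0 * (x10_before_um - x10_after_um) / x10_before_um

# Hypothetical x10 values (µm) consistent with the reported ranges:
print(x10_reduction(50.0, 7.5))   # → 85.0  (granular DCP/mannitol, 80-90% range)
print(x10_reduction(50.0, 37.5))  # → 25.0  (granular lactose, 20-30% range)
```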

Keywords: attrition, blending, continuous manufacturing, excipients, lactose, microcrystalline cellulose, shear

Procedia PDF Downloads 112
263 Design and Biomechanical Analysis of a Transtibial Prosthesis for Cyclists of the Colombian Team Paralympic

Authors: Jhonnatan Eduardo Zamudio Palacios, Oscar Leonardo Mosquera Dussan, Daniel Guzman Perez, Daniel Alfonso Botero Rosas, Oscar Fabian Rubiano Espinosa, Jose Antonio Garcia Torres, Ivan Dario Chavarro, Ivan Ramiro Rodriguez Camacho, Jaime Orlando Rodriguez

Abstract:

The training of cyclists with some type of disability finds an indispensable ally in technological development, which generates advances every day that contribute to quality of life and allow athletes' capacities to be maximized. The performance of a cyclist depends on physiological and biomechanical factors, such as aerodynamic profile, bicycle measurements, crank length, pedaling systems, and type of competition, among others. This study focuses on the description of the dynamic model of a transtibial prosthesis for Paralympic cyclists. To build the model, two points are chosen: the centers of rotation of the chainring and sprocket of the track bicycle. The parametric scheme of the track bike represents a model of 6 degrees of freedom due to the displacement in X-Y of each of the reference points of the curve profile angle β, the velodrome cant α, and the crank rotation angle φ. The force exerted on the crank of the bicycle varies according to the curve profile angle β, the velodrome cant α, and the crank rotation angle φ. The behavior is analyzed through the Matlab R2015a software. The average force that a cyclist exerts on the cranks of a bicycle is 1,607.1 N, so the Paralympic cyclist must exert a force of about 803.6 N on each crank. Once the maximum force associated with the movement was determined, the dynamic modeling of the transtibial prosthesis was carried out; it represents a model of 6 degrees of freedom with displacement in X-Y in relation to the rotation angles of the hip π, knee γ, and ankle λ. Subsequently, an analysis of the kinematic behavior of the prosthesis was carried out by means of SolidWorks 2017 and Matlab R2015a, which were used to model and analyze the variation of the hip π, knee γ, and ankle λ angles of the prosthesis.
The reaction forces generated in the prosthesis were computed at the ankle of the prosthesis by summing forces on the X and Y axes. The same analysis was then applied to the tibia of the prosthesis and the socket. The reaction force in the parts of the prosthesis varies according to the hip π, knee γ, and ankle λ angles of the prosthesis. From this, it can be deduced that the maximum forces experienced by the ankle of the prosthesis are 933.6 N on the X axis and 2,160.5 N on the Y axis. Finally, it is calculated that the maximum forces experienced by the tibia and the socket of the transtibial prosthesis in high-performance competitions are 3,266 N on the X axis and 1,357 N on the Y axis. In conclusion, it can be said that the performance of the cyclist depends on several physiological factors linked to the biomechanics of training, as well as on biomechanical factors such as aerodynamics, bicycle measurements, crank length, and non-circular pedaling systems.
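Two of the figures above follow from simple arithmetic on the reported values: the per-crank share of the average total crank force, and the resultant magnitude of the ankle reaction from its X/Y components. Computing the resultant from the components is an illustrative step of our own; the paper reports only the components.

```python
import math

# Reported average total crank force, split evenly across the two cranks.
total_crank_force = 1607.1          # N
per_crank = total_crank_force / 2
print(round(per_crank, 2))          # → 803.55, reported as ~803.6 N

# Resultant ankle reaction magnitude from the reported X/Y components.
Fx, Fy = 933.6, 2160.5              # N
resultant = math.hypot(Fx, Fy)      # sqrt(Fx^2 + Fy^2)
print(round(resultant, 1))
```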

Keywords: biomechanics, dynamic model, paralympic cyclist, transtibial prosthesis

Procedia PDF Downloads 344
262 Intermodal Strategies for Redistribution of Agrifood Products in the EU: The Case of Vegetable Supply Chain from Southeast of Spain

Authors: Juan C. Pérez-Mesa, Emilio Galdeano-Gómez, Jerónimo De Burgos-Jiménez, José F. Bienvenido-Bárcena, José F. Jiménez-Guerrero

Abstract:

Environmental costs and road congestion resulting from product distribution in Europe have led to the creation of various programs and studies seeking to reduce these negative impacts. In this regard, apart from other institutions, the European Commission (EC) has in recent years designed plans promoting a more sustainable transportation model in an attempt to ultimately shift traffic from road to sea, using intermodality to achieve a rebalancing of the model. This issue proves especially relevant in supply chains from peripheral areas of the continent, where the supply of certain agrifood products is high. In such cases, the most difficult challenge is managing perishable goods. This study focuses on new approaches that strengthen the modal shift, as well as the reduction of externalities. This problem is analyzed by attempting to promote an intermodal system (truck and short sea shipping) for transport, taking as a point of reference highly perishable products (vegetables) exported from southeast Spain, which is the leading supplier to Europe. Methodologically, this paper seeks to contribute to the literature by proposing a different and complementary approach to establishing a comparison between the intermodal and the “only road” alternatives. For this purpose, multicriteria decision-making is utilized in a p-median model (P-M) adapted to the transport of perishables and to a means-of-shipping selection problem, which must consider different variables: transit cost (including externalities), time, and frequency (including agile response time). This scheme avoids bias in decision-making processes. The results show that the influence of externalities as drivers of the modal shift is reduced when transit time is introduced as a decision variable. These findings confirm that general strategies, such as those of the EC, based on environmental benefits lose their capacity for implementation when applied to complex circumstances.
In general, the different estimations reveal that, in the case of perishables, intermodality would be a secondary, viable option only for very specific destinations (for example, Hamburg and nearby locations, the area of influence of London, Paris, and the Netherlands). Based on this framework, the general outlook on this subject should be modified. Perhaps governments should promote specific business strategies based on new trends in the supply chain, not only on the reduction of externalities, and find new approaches that strengthen the modal shift. A possible option is to redefine ports, conceptualizing them as digitalized redistribution and coordination centers and not only as areas of cargo exchange.
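The selection logic of a p-median style comparison, choosing p shipping alternatives so that each market is served by its cheapest selected alternative under a weighted cost-and-time criterion, can be sketched with toy numbers. The markets, alternatives, (cost, time) pairs, and weights below are all hypothetical placeholders, not the paper's data or calibrated model.

```python
from itertools import combinations

markets = ["Hamburg", "London", "Paris"]
# Hypothetical (transit cost incl. externalities, transit time in hours):
routes = {
    "road":       {"Hamburg": (9.0, 30), "London": (8.0, 28), "Paris": (6.0, 20)},
    "intermodal": {"Hamburg": (6.5, 48), "London": (7.0, 46), "Paris": (7.5, 50)},
    "mixed":      {"Hamburg": (7.5, 40), "London": (7.5, 36), "Paris": (7.0, 34)},
}
w_cost, w_time = 0.7, 0.3  # illustrative multicriteria weights

def total_cost(selected):
    """Each market is served by its cheapest selected alternative (p-median)."""
    return sum(
        min(w_cost * routes[r][m][0] + w_time * routes[r][m][1] / 10
            for r in selected)
        for m in markets
    )

p = 2  # number of alternatives to keep open
best = min(combinations(routes, p), key=total_cost)
print(best, round(total_cost(best), 2))
```

With these toy weights, raising the weight on time penalizes the slower intermodal option, mirroring the finding that introducing transit time as a decision variable weakens the case for the modal shift.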

Keywords: environmental externalities, intermodal transport, perishable food, transit time

Procedia PDF Downloads 98
261 Understanding the Cause(S) of Social, Emotional and Behavioural Difficulties of Adolescents with ADHD and Its Implications for the Successful Implementation of Intervention(S)

Authors: Elisavet Kechagia

Abstract:

Due to the interplay of different genetic and environmental risk factors and its heterogeneous nature, the concept of attention deficit hyperactivity disorder (ADHD) has generated controversy and conflict, which have in turn been reflected in controversial arguments about its treatment. Taking into account recent well-evidenced research suggesting that ADHD is a condition in which biopsychosocial factors are all weaved together, the current paper explores the multiple risk factors that are likely to influence ADHD, with a particular focus on adolescents with ADHD who might experience comorbid social, emotional and behavioural disorders (SEBD). In the first section of this paper, the primary objective is to investigate the conflicting ideas regarding the definition, diagnosis and treatment of ADHD at an international level, as well as to critically examine and identify the limitations of the two most prevailing sets of diagnostic criteria that inform current diagnosis: the American Psychiatric Association’s (APA) diagnostic scheme, DSM-V, and the World Health Organisation’s (WHO) classification of diseases, ICD-10. Taking into consideration the findings of current longitudinal studies on ADHD’s association with high rates of comorbid conditions and social dysfunction, in the second section the author moves towards an investigation of the transitional points (physical, psychological and social) that students with ADHD might experience during early adolescence, as informed by neuroscience and developmental contextualism theory. The third section is an exploration of the different perspectives on ADHD as reflected in the self-reports of individuals with ADHD and the KENT project’s findings on school staff’s attitudes and practices.
In the last section, given the high rates of SEBDs in adolescents with ADHD, it is examined how cognitive behavioural therapy (CBT), coupled with other interventions, could be effective in ameliorating anti-social behaviours and/or other emotional and behavioural difficulties of students with ADHD. The findings of a range of randomised control studies indicate that CBT might have positive outcomes in adolescents with multiple behavioural problems; hence, it is suggested that it be considered both in schools and in other community settings. Finally, taking into account the heterogeneous nature of ADHD, the different biopsychosocial and environmental risk factors at play during adolescence, and the discourse and practices concerning ADHD and SEBD, it is suggested how it might be possible to make sense of, and meaningful improvements to, the education of adolescents with ADHD within a multi-modal and multi-disciplinary whole-school approach that addresses the multiple problems that not only students with ADHD but also their peers might experience. Further research based on larger-scale controlled studies, investigating the effectiveness of various interventions as well as the profiles of those students who have benefited from particular approaches and those who have not, will generate further evidence concerning the psychoeducation of adolescents with ADHD, allowing generalised conclusions to be drawn.

Keywords: adolescence, attention deficit hyperactivity disorder, cognitive behavioural therapy, comorbid social emotional behavioural disorders, treatment

Procedia PDF Downloads 320
260 A Quality Index Optimization Method for Non-Invasive Fetal ECG Extraction

Authors: Lucia Billeci, Gennaro Tartarisco, Maurizio Varanini

Abstract:

Fetal cardiac monitoring by fetal electrocardiogram (fECG) can provide significant clinical information about the health condition of the fetus. Despite this potential, the use of fECG in clinical practice has so far been quite limited due to the difficulties of measuring it. The recovery of fECG from signals acquired non-invasively using electrodes placed on the maternal abdomen is a challenging task, because abdominal signals are a mixture of several components and the fetal one is very weak. This paper presents an approach for fECG extraction from abdominal maternal recordings which exploits the pseudo-periodicity of the fetal ECG. It consists of devising a quality index (fQI) for the fECG and of finding the linear combinations of preprocessed abdominal signals which maximize this fQI (quality index optimization, QIO). It aims at improving the performance of the most commonly adopted methods for fECG extraction, usually based on maternal ECG (mECG) estimation and cancellation. The procedure for fECG extraction and fetal QRS (fQRS) detection is completely unsupervised and based on the following steps: signal pre-processing; mECG extraction and maternal QRS detection; mECG component approximation and cancellation by weighted principal component analysis; fECG extraction by fQI maximization and fetal QRS detection. The proposed method was compared with our previously developed procedure, which obtained the highest score at the PhysioNet/Computing in Cardiology Challenge 2013. That procedure was based on removing the mECG, estimated by principal component analysis (PCA), from the abdominal signals and applying independent component analysis (ICA) to the residual signals. Both methods were developed and tuned using 69 one-minute-long abdominal measurements with fetal QRS annotations from dataset A, provided by the PhysioNet/Computing in Cardiology Challenge 2013.
The QIO-based and ICA-based methods were compared by analyzing two databases of abdominal maternal ECG available on the PhysioNet site. The first is the Abdominal and Direct Fetal Electrocardiogram Database (ADdb), which contains fetal QRS annotations and thus allows a quantitative performance comparison; the second is the Non-Invasive Fetal Electrocardiogram Database (NIdb), which does not contain fetal QRS annotations, so that the comparison between the two methods can only be qualitative. On the annotated database ADdb, the QIO method provided the performance indexes Sens = 0.9988, PPA = 0.9991, F1 = 0.9989, overcoming the ICA-based one, which provided Sens = 0.9966, PPA = 0.9972, F1 = 0.9969. The comparison on NIdb was performed by defining an index of quality for the fetal RR series; this index resulted higher for the QIO-based method than for the ICA-based one in 35 out of 55 records of the NIdb. The QIO-based method gave very high performance on both databases. The results of this study support the application of the algorithm in a fully unsupervised way for implementation in wearable devices for self-monitoring of fetal health.
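The reported F1 values follow directly from sensitivity (Sens) and positive predictive accuracy (PPA) as their harmonic mean, the standard combination used to score QRS detection; the check below reproduces both reported figures from the reported components.

```python
def f1(sens, ppa):
    """Harmonic mean of sensitivity and positive predictive accuracy."""
    return 2 * sens * ppa / (sens + ppa)

print(round(f1(0.9988, 0.9991), 4))  # → 0.9989 (QIO method, matches reported F1)
print(round(f1(0.9966, 0.9972), 4))  # → 0.9969 (ICA method, matches reported F1)
```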

Keywords: fetal electrocardiography, fetal QRS detection, independent component analysis (ICA), optimization, wearable

Procedia PDF Downloads 281