Search results for: translation as conversion
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1739

89 Nursery Treatments May Improve Restoration Outcomes by Reducing Seedling Transplant Shock

Authors: Douglas E. Mainhart, Alejandro Fierro-Cabo, Bradley Christoffersen, Charlotte Reemts

Abstract:

Semi-arid ecosystems across the globe have faced land conversion for agriculture and resource extraction activities, posing a threat to the important ecosystem services they provide. Revegetation-centered restoration efforts in these regions face low success rates due to limited soil water availability and high temperatures leading to elevated seedling mortality after planting. Typical methods to alleviate these stresses require costly post-planting interventions aimed at improving soil moisture status. We set out to evaluate the efficacy of applying in-nursery treatments to address transplant shock. Four native Tamaulipan thornscrub species were compared. Three treatments were applied: elevated CO2 and drought hardening (four-week exposure each), and antitranspirant foliar spray (the day prior to planting). Our goal was to answer two primary questions: (1) Do treatments improve survival and growth of seedlings in the early period post-planting? (2) If so, what underlying physiological changes are associated with this improved performance? To this end, we measured leaf gas exchange (stomatal conductance, light-saturated photosynthetic rate, water use efficiency), leaf morphology (specific leaf area), and osmolality before and upon the conclusion of treatments. A subset of seedlings from all treatments has been planted and will be monitored in coming months for in-field survival and growth. First-month field survival was high (>85%) for all treatment groups due to ample rainfall following planting. Growth data were unreliable due to high herbivory (68% of all sampled plants). While elevated CO2 had infrequent or no detectable influence on all aspects of leaf gas exchange, drought hardening reduced stomatal conductance in three of the four species measured without negatively impacting photosynthesis. Both CO2 and drought hardening elevated leaf osmolality in two species. Antitranspirant application significantly reduced conductance in all species for up to four days and reduced photosynthesis in two species. Antitranspirants also increased the variability of water use efficiency compared to controls. Collectively, these results suggest that antitranspirants and drought hardening are viable treatments for reducing short-term water loss during the transplant shock period. Elevated CO2, while not effective at reducing water loss, may be useful for promoting more favorable water status via osmotic adjustment. These practices could improve restoration outcomes in Tamaulipan thornscrub and other semi-arid systems. Further research should focus on evaluating combinations of these treatments and their species-specific viability.

Keywords: conservation, drought conditioning, semi-arid restoration, plant physiology

Procedia PDF Downloads 62
88 Blade-Coating Deposition of Semiconducting Polymer Thin Films: Light-To-Heat Converters

Authors: M. Lehtihet, S. Rosado, C. Pradère, J. Leng

Abstract:

Poly(3,4-ethylene dioxythiophene) polystyrene sulfonate (PEDOT:PSS) is a polymer mixture well known for its semiconducting properties and widely used in the coating industry for its visible transparency and high electronic conductivity (up to 4600 S/cm), both as a transparent non-metallic electrode and in organic light-emitting diodes (OLED). It also possesses strong absorption in the near-infrared (NIR) range (λ between 900 nm and 2.5 µm). In the present work, we take advantage of this absorption to explore its potential use as a transparent light-to-heat converter. PEDOT:PSS aqueous dispersions are deposited onto a glass substrate using a blade-coating technique in order to produce uniform coatings with controlled thicknesses ranging from ≈ 400 nm to 2 µm. The blade-coating technique gives good control of deposit thickness and uniformity through the tuning of several experimental conditions (blade velocity, evaporation rate, temperature, etc.). This liquid coating technique is a well-known, inexpensive way to realize thin-film coatings on various substrates. For coatings on glass substrates intended for solar insulation applications, the ideal coating would be made of a material able to transmit the entire visible range while reflecting the NIR range perfectly, but materials with such properties still show unsatisfactory opacity in the visible range (for example, titanium dioxide nanoparticles). NIR-absorbing thin films are a more realistic alternative for such an application. Under solar illumination, PEDOT:PSS thin films heat up due to absorption of NIR light and thus act as planar heaters while maintaining good transparency in the visible range. While they screen some NIR radiation, they also generate heat, which is conducted into the substrate and re-emitted as thermal radiation in every direction. In order to quantify the heating power of these coatings, a sample (coating on glass) is placed in a black enclosure and illuminated with a solar simulator, a lamp emitting a calibrated radiation very similar to the solar spectrum. The temperature of the rear face of the substrate is measured in real time using thermocouples, and a black-painted Peltier sensor measures the total entering flux (the sum of transmitted and re-emitted fluxes). The heating power density of the thin films is estimated from a model of the thin film/glass substrate system, and we estimate the Solar Heat Gain Coefficient (SHGC) to quantify the light-to-heat conversion efficiency of such systems. Finally, the effect of additives such as dimethyl sulfoxide (DMSO) or optical scatterers (particles) on the performance is also studied, as the former can drastically alter the IR absorption properties of PEDOT:PSS and the latter can increase the apparent optical path of light within the thin-film material.
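
As a rough illustration of the SHGC estimate described in this abstract, the sketch below forms the coefficient as the ratio of the total entering flux (transmitted plus inward re-emitted) to the incident flux; all numerical values are hypothetical placeholders, not measurements from the study.

```python
# Minimal sketch: estimating a solar heat gain coefficient (SHGC) from
# measured fluxes. All numbers below are hypothetical placeholders.

incident_flux = 1000.0     # W/m^2, solar-simulator irradiance (assumed)
transmitted_flux = 620.0   # W/m^2, directly transmitted through coating + glass (assumed)
reemitted_flux = 110.0     # W/m^2, inward re-emission/conduction from the heated film (assumed)

# SHGC is commonly taken as the fraction of incident solar energy entering the
# room: direct transmission plus the inward-flowing part of the absorbed energy.
entering_flux = transmitted_flux + reemitted_flux
shgc = entering_flux / incident_flux
print(f"Estimated SHGC: {shgc:.2f}")
```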

Keywords: PEDOT: PSS, blade-coating, heat, thin-film, Solar spectrum

Procedia PDF Downloads 138
87 Linguistic Cyberbullying, a Legislative Approach

Authors: Simona Maria Ignat

Abstract:

Online bullying has been an increasingly studied topic in recent years. Different approaches, psychological, linguistic, or computational, have been applied. To the best of our knowledge, an internationally agreed definition and set of characteristics of the phenomenon, serving as a common framework, are still lacking. Thus, the objectives of this paper are the identification of bullying utterances on Twitter and their representation as algorithms. This research paper is focused on the identification of words or groups of words, categorized as “utterances”, with bullying effect, from the Twitter platform, extracted using a set of legislative criteria. This set is the result of analysis followed by synthesis of law documents on (online) bullying from the United States of America, the European Union, and Ireland. The outcome is a linguistic corpus with approximately 10,000 entries. The methods applied to the first objective were the following. Discourse analysis was applied to identify keywords with bullying effect in texts from the Google search engine, Images link. Transcription and anonymization were applied to texts grouped in CL1 (Corpus linguistics 1). The keyword search method and the legislative criteria were used for identifying bullying utterances on Twitter. The texts with at least 30 representations on Twitter were grouped; they form the second linguistic corpus, Bullying utterances from Twitter (CL2). The entries were identified by applying the legislative criteria on the BoW (bag-of-words) method principle. The BoW is a method of extracting words or groups of words with the same meaning in any context. The method applied for reaching the second objective is the conversion of parts of speech to alphabetical and numerical symbols and the writing of the bullying utterances as algorithms. The converted form of parts of speech was chosen on the criterion of relevance within the bullying message. The inductive reasoning approach was applied in sampling and identifying the algorithms. The results are groups with interchangeable elements. The outcomes convey two aspects of bullying: the form and the content or meaning. The form conveys the intentional intimidation against somebody, expressed at the level of texts by grammatical and lexical marks. This outcome has applicability in forensic linguistics for establishing the intentionality of an action. Another outcome of form is a complex of graphemic variations essential in detecting harmful texts online. This research enriches the lexicon already known on the topic. The second aspect, the content, revealed topics like threat, harassment, assault, or suicide. They are subcategories of a broader harmful content which is a constant concern for task forces and legislators at national and international levels. These topic outcomes of the dataset are a valuable source for detection. The analysis of content revealed algorithms and lexicons which could be applied to other harmful content. A third outcome of content concerns stylistics, which offers a rich source for discourse analysis of social media platforms. In conclusion, this linguistic corpus is structured on legislative criteria and could be used in various fields.
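
As a loose illustration of the bag-of-words style keyword matching described in this abstract, the sketch below flags hypothetical tweets against a hypothetical list of legislatively derived utterances using scikit-learn; the corpus, vocabulary, and matching rule are invented for illustration and do not come from the study.

```python
# Minimal sketch of a bag-of-words style keyword match, in the spirit of the
# BoW extraction described above. Tweets and keyword list are hypothetical
# placeholders, not data from the study.
from sklearn.feature_extraction.text import CountVectorizer

tweets = [
    "you are worthless and everyone knows it",
    "great game last night, congrats to the team",
    "nobody wants you here, just leave",
]
# Hypothetical utterances derived from legislative criteria (threat, harassment, ...)
bullying_vocabulary = ["worthless", "nobody wants you", "leave"]

vectorizer = CountVectorizer(vocabulary=bullying_vocabulary, ngram_range=(1, 3))
counts = vectorizer.fit_transform(tweets)

for tweet, row in zip(tweets, counts.toarray()):
    if row.sum() > 0:
        print("flagged:", tweet, "| matched terms:", int(row.sum()))
```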

Keywords: corpus linguistics, cyberbullying, legislation, natural language processing, twitter

Procedia PDF Downloads 59
86 Using Q Methodology to Capture Attitudes about Academic Resilience in an Online Postgraduate Psychology Course

Authors: Eleanor F. Willard

Abstract:

The attrition rate on distance learning courses can be high. This research examines how online students often react when faced with poor results. Using Q methodology, it was found that the level of emotional response and the type of social support sought by students were key influences on their attitude to failure. As educational and psychological researchers, we are adept at measuring learning and achievement, but attitudes towards barriers to learning are not so well researched. The distance learning student has differing needs from onsite learners and, as the attrition rate is notoriously high in the online student population, examining learners’ attitudes towards adversity and barriers is important. Self-report measures such as questionnaires are useful for ascertaining levels of constructs such as resilience and academic confidence. Interviewing, too, can provide in-depth detail of the opinions of such a population, but only for individuals. The aim of this research was to ascertain the feelings and attitudes of online students when faced with a setback. This was achieved using Q methodology because it combines quantitative and qualitative methods and is well suited to exploratory research. The emphasis with this methodology is on the attitudes, not the individuals. The work was focused upon a population of distance learning students who attended a school on site for one week as part of their studies. They were engaged in a psychology masters conversion course and, as such, were graduate students. The Q sort had 30 items taken from the Academic Resilience Scale (ARS-30). The scale items represent three constructs: perseverance, reflecting (including adaptive help-seeking), and negative affect. These are widely acknowledged as relevant concepts underpinning psychological resilience. The Q sort was conducted with 19 students in total. Participants arrange statement cards according to how similar to themselves they believe each statement to be. This was done after reading a vignette describing an experience of academic failure. Commonalities and differences between the sorts from all participants are then analyzed in terms of correlations and response patterns. Following data collection, the participants' responses were analyzed, and the key perspectives (factors) to emerge were labelled ‘persevering individuals’ and ‘emotional networkers’. The differences between the two perspectives centre on the level of emotion felt when faced with barriers and the extent to which students enlist the help of others inside and outside of the university. The dominant factor to emerge from the sorts, ‘persevering individuals’, demonstrated that many distance learners are tenacious. However, for other students, the level of emotional and social support is pivotal in helping them complete their studies when facing adversity. This was demonstrated by the ‘emotional networkers’ perspective. This research forms a starting point for further work on engaging and retaining online students at university and can potentially provide insight into how universities can lower attrition rates on distance learning courses.
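
For readers unfamiliar with the mechanics of Q methodology, the sketch below shows the by-person correlation and factor-extraction step in outline, using randomly generated sorts of 30 statements for 19 hypothetical participants; it is not the authors' analysis pipeline.

```python
# Minimal sketch of the by-person correlation step that underlies Q-method
# factor extraction. The sorts below are random placeholders, not the 19
# participant sorts from the study.
import numpy as np

rng = np.random.default_rng(0)
n_participants, n_statements = 19, 30
# Each row: one participant's Q sort of the 30 ARS-30 statements
# (ranks from "least like me" to "most like me"), here simulated.
sorts = np.array([rng.permutation(n_statements) for _ in range(n_participants)])

# Q methodology correlates people (whole sorts), not items.
person_corr = np.corrcoef(sorts)          # 19 x 19 correlation matrix

# Principal-components style extraction: eigen-decompose the correlation matrix.
eigvals, eigvecs = np.linalg.eigh(person_corr)
order = np.argsort(eigvals)[::-1]
explained = eigvals[order] / eigvals.sum()
print("variance explained by first two factors:", explained[:2].round(2))
```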

Keywords: academic resilience, distance learning, online learning, q methodology

Procedia PDF Downloads 106
85 Stuck Spaces as Moments of Learning: Uncovering Threshold Concepts in Teacher Candidate Experiences of Teaching in Inclusive Classrooms

Authors: Joy Chadwick

Abstract:

There is no doubt that classrooms today are more complex and diverse than ever before. Preparing teacher candidates to meet these challenges is essential to ensure the retention of teachers within the profession and to ensure that graduates begin their teaching careers with the knowledge and understanding of how to effectively meet the diversity of students they will encounter. Creating inclusive classrooms requires teachers to have a repertoire of effective instructional skills and strategies. Teachers must also have the mindset to embrace diversity and value the uniqueness of the individual students in their care. This qualitative study analyzed teacher candidates' experiences as they completed a fourteen-week teaching practicum while simultaneously completing a university course focused on inclusive pedagogy. The research investigated the challenges and successes teacher candidates had in navigating the translation of theory related to inclusive pedagogy into their teaching practice. Applying threshold concept theory as a framework, the research explored the troublesome concepts, liminal spaces, and transformative experiences connected to inclusive practices. Threshold concept theory suggests that within all disciplinary fields there exist particular threshold concepts that serve as gateways or portals into previously inaccessible ways of thinking and practicing. It is in these liminal spaces that conceptual shifts in thinking and understanding, and deep learning, can occur. The threshold concept framework provided a lens through which to examine teacher candidates' struggles and successes with the inclusive education course content and the application of this content to their practicum experiences. A qualitative research approach was used, which included analyzing twenty-nine reflective course journals and six follow-up one-to-one semi-structured interviews. The journals and interview transcripts were coded and themed using NVivo software. Threshold concept theory was then applied to the data to uncover the liminal or stuck spaces of learning and the ways in which the teacher candidates navigated those challenging places of teaching. The research also sought to uncover potential transformative shifts in teacher candidates' understanding of teaching in an inclusive classroom. The findings suggested that teacher candidates experienced difficulties when they did not feel they had the knowledge, skill, or time to meet the needs of the students in the way they envisioned they should. To navigate the frustration of this thwarted vision, they relied on present and previous course content and experiences, collaborative work with other teacher candidates and their mentor teachers, and a proactive approach to planning for students. Transformational shifts were most evident in their ability to reframe their perceptions of children from a deficit or disability lens to a strength-based belief in the potential of students. It was evident that through their course work and practicum experiences, their beliefs regarding struggling students shifted as they saw the value of embracing neurodiversity, the importance of relationships, and planning for and teaching through a strength-based approach. The research findings have implications for teacher education programs and for understanding threshold concept theory as connected to practice-based learning experiences.

Keywords: inclusion, inclusive education, liminal space, teacher education, threshold concepts, troublesome knowledge

Procedia PDF Downloads 47
84 Rigorous Photogrammetric Push-Broom Sensor Modeling for Lunar and Planetary Image Processing

Authors: Ahmed Elaksher, Islam Omar

Abstract:

Accurate geometric relation algorithms are imperative in Earth and planetary satellite and aerial image processing, particularly for high-resolution images that are used for topographic mapping. Most of these satellites carry push-broom sensors. These sensors are optical scanners equipped with linear arrays of CCDs and have been deployed on most Earth observation satellites (EOSs). In addition, the LROC is equipped with two push-broom NACs that provide 0.5 meter-scale panchromatic images over a 5 km swath of the Moon. The HiRISE carried by the MRO and the HRSC carried by MEX are examples of push-broom sensors that produce images of the surface of Mars. Sensor models developed in photogrammetry relate image space coordinates in two or more images with the 3D coordinates of ground features. Rigorous sensor models use the actual interior and exterior orientation parameters of the camera, unlike approximate models. In this research, we generate a generic push-broom sensor model to process imagery acquired through linear array cameras and investigate its performance, advantages, and disadvantages in generating topographic models for the Earth, Mars, and the Moon. We also compare and contrast the utilization, effectiveness, and applicability of available photogrammetric techniques and softcopies with the developed model. We start by defining an image reference coordinate system to unify image coordinates from all three arrays. The transformation from an image coordinate system to the reference coordinate system involves a translation and three rotations. For any image point within the linear array, its image reference coordinates, the coordinates of the exposure center of the array in the ground coordinate system at the imaging epoch (t), and the corresponding ground point coordinates are related through the collinearity condition, which states that these three points must lie on the same line. The rotation angles for each CCD array at the epoch t are defined and included in the transformation model. The exterior orientation parameters of an image line, i.e., the coordinates of the exposure station and the rotation angles, are computed by a polynomial interpolation function in time (t). The parameter (t) is the time at a certain epoch from a certain orbit position. Depending on the types of observations, coordinates and parameters may be treated as knowns or unknowns in various situations. The unknown coefficients are determined in a bundle adjustment. The orientation process starts by extracting the sensor position, orientation, and raw images from the PDS. The parameters of each image line are then estimated and imported into the push-broom sensor model. We also define tie points between image pairs to aid the bundle adjustment, determine the refined camera parameters, and generate highly accurate topographic maps. The model was tested on different satellite images such as IKONOS, QuickBird, WorldView-2, and HiRISE. It was found that the accuracy of our model is comparable to that of commercial and open-source software, the computational efficiency of the developed model is high, and the model can be used in different environments with various sensors, although the implementation process is more cost- and effort-consuming.
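
A minimal sketch of the two ingredients described above, exterior orientation parameters interpolated in time by polynomials and the collinearity condition for a point on a scan line, is given below; every coefficient and coordinate is a hypothetical placeholder rather than a calibrated value.

```python
# Minimal sketch: evaluating the collinearity condition for one point on a
# push-broom image line whose exterior orientation is interpolated in time
# by low-order polynomials. All coefficients below are hypothetical.
import numpy as np

def rotation_matrix(omega, phi, kappa):
    """Rotation built from omega-phi-kappa angles (ground-to-image)."""
    co, so = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi), np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    Rx = np.array([[1, 0, 0], [0, co, -so], [0, so, co]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[ck, -sk, 0], [sk, ck, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def exterior_orientation(t, coeffs):
    """Each EO parameter modelled as a quadratic polynomial in time t."""
    return np.array([np.polyval(c, t) for c in coeffs])

# Hypothetical polynomial coefficients for Xc, Yc, Zc, omega, phi, kappa.
eo_coeffs = [(0.0, 50.0, 1000.0), (0.0, 0.0, 2000.0), (0.0, 0.0, 500.0),
             (0.0, 0.0, 0.01), (0.0, 0.0, -0.02), (0.0, 0.0, 0.005)]
focal_length = 0.7                          # metres (assumed)
ground_point = np.array([1050.0, 2020.0, 30.0])

t = 0.8                                     # epoch of the scan line
Xc, Yc, Zc, om, ph, ka = exterior_orientation(t, eo_coeffs)
u = rotation_matrix(om, ph, ka) @ (ground_point - np.array([Xc, Yc, Zc]))

# Collinearity: image coordinates of the projected ground point for this line.
x = -focal_length * u[0] / u[2]
y = -focal_length * u[1] / u[2]
print(f"projected image coordinates: x={x:.4f} m, y={y:.4f} m")
```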

Keywords: photogrammetry, push-broom sensors, IKONOS, HiRISE, collinearity condition

Procedia PDF Downloads 42
83 Tall Building Transit-Oriented Development (TB-TOD) and Energy Efficiency in Suburbia: Case Studies, Sydney, Toronto, and Washington D.C.

Authors: Narjes Abbasabadi

Abstract:

As the world continues to urbanize and suburbanize, with suburbanization associated with mass sprawl being the dominant form of this expansion, sustainable development challenges become more pressing. Sprawl, characterized by low density and automobile dependency, presents significant environmental issues regarding energy consumption and CO2 emissions. This paper examines the vertical expansion of suburbs integrated into mass transit nodes as a planning strategy for boosting density, intensifying land use, converting single-family homes to multifamily dwellings or mixed-use buildings, and developing viable alternative transportation choices. It analyzes the spatial patterns of tall building transit-oriented development (TB-TOD) of suburban regions in Sydney (Australia), Toronto (Canada), and Washington D.C. (United States). The main objective of this research is to understand the relationship between the new morphology of suburban tall buildings, the physical dimensions of individual buildings and their arrangement at a larger scale, and energy efficiency. This study aims to answer these questions: 1) Why and how can the potential phenomenon of vertical expansion or high-rise development be integrated into suburban settings? 2) How can this phenomenon contribute to an overall denser development of suburbs? 3) Which spatial patterns or typologies/sub-typologies of the TB-TOD model have the greatest energy efficiency? It addresses these questions by focusing on 1) energy, namely heat energy demand (excluding cooling and lighting) related to design issues at two levels, the macro, urban scale and the micro scale of individual buildings—physical dimension, height, morphology, spatial pattern of tall buildings and their relationship with each other and with transport infrastructure; and 2) examining TB-TOD to provide more evidence of how the model works regarding ridership. The findings of the research show that the TB-TOD model can be identified as the most appropriate spatial pattern of tall buildings in suburban settings. Among the TB-TOD typologies/sub-typologies, compact tall building blocks can be the most energy efficient. This model is associated with much lower energy demands in buildings at the neighborhood level as well as lower transport needs at the urban scale, while detached suburban high-rise or low-rise suburban housing will have the lowest energy efficiency. The research methodology is based on a quantitative study using the available literature and static data as well as mapping and visual documentation of urban regions from sources such as Google Earth, Microsoft Bing Bird View, and Streetview. Each suburb within each city is examined through satellite imagery to identify the typologies/sub-typologies which are morphologically distinct. The study quantifies the heat energy efficiency of different spatial patterns through simulation via GIS software.

Keywords: energy efficiency, spatial pattern, suburb, tall building transit-oriented development (TB-TOD)

Procedia PDF Downloads 230
82 Co₂Fe LDH on Aromatic Acid Functionalized N Doped Graphene: Hybrid Electrocatalyst for Oxygen Evolution Reaction

Authors: Biswaranjan D. Mohapatra, Ipsha Hota, Swarna P. Mantry, Nibedita Behera, Kumar S. K. Varadwaj

Abstract:

Designing highly active and low-cost oxygen evolution (2H₂O → 4H⁺ + 4e⁻ + O₂) electrocatalysts is one of the most active areas of advanced energy research. Some precious metal-based electrocatalysts, such as IrO₂ and RuO₂, have shown excellent performance for the oxygen evolution reaction (OER); however, they suffer from high cost and low abundance, which limits their applications. Recently, layered double hydroxides (LDHs), composed of layers of divalent and trivalent transition metal cations coordinated to hydroxide anions, have gathered attention as alternative OER catalysts. However, LDHs are insulators and are therefore coupled with carbon materials for electrocatalytic applications. Graphene covalently doped with nitrogen has been demonstrated to be an excellent electrocatalyst for energy conversion technologies such as the oxygen reduction reaction (ORR), oxygen evolution reaction (OER), and hydrogen evolution reaction (HER). However, these catalysts operate at high overpotentials, significantly above the thermodynamic standard potentials. Recently, we reported remarkably enhanced catalytic activity of benzoate- or 1-pyrenebutyrate-functionalized N-doped graphene towards the ORR in alkaline medium. The molecular and heteroatom co-doping of graphene is expected to tune its electronic structure. Therefore, an innovative catalyst architecture, in which LDHs are anchored on aromatic acid functionalized ‘N’ doped graphene, may presumably boost the OER activity to a new benchmark. Herein, we report the fabrication of Co₂Fe-LDH on aromatic acid (AA) functionalized ‘N’ doped reduced graphene oxide (NG) and study their OER activities in alkaline medium. In the first step, a novel polyol method is applied for the synthesis of AA-functionalized NG, which is well dispersed in aqueous medium. In the second step, Co₂Fe LDH is grown on AA-functionalized NG by a co-precipitation method. The hybrid samples are abbreviated as Co₂Fe LDH/AA-NG, where AA is either benzoic acid, 1,3-benzenedicarboxylic acid (BDA), or 1,3,5-benzenetricarboxylic acid (BTA). The crystal structure and morphology of the samples were characterized by X-ray diffraction (XRD), scanning electron microscopy (SEM), and transmission electron microscopy (TEM). These studies confirmed the growth of layered single-phase LDH. The electrocatalytic OER activity of these hybrid materials was investigated by the rotating disc electrode (RDE) technique on a glassy carbon electrode. Linear sweep voltammetry (LSV) on these catalyst samples was carried out at 1600 rpm. We observed significant OER performance enhancement in terms of onset potential and current density on the Co₂Fe LDH/BTA-NG hybrid, indicating a synergistic effect. This exploration of the effect of molecular functionalization in doped graphene and LDH systems may provide an excellent platform for the innovative design of OER catalysts.

Keywords: π-π functionalization, layered double hydroxide, oxygen evolution reaction, reduced graphene oxide

Procedia PDF Downloads 181
81 Various Shaped ZnO and ZnO/Graphene Oxide Nanocomposites and Their Use in Water Splitting Reaction

Authors: Sundaram Chandrasekaran, Seung Hyun Hur

Abstract:

Exploring strategies for oxygen vacancy engineering under mild conditions and understanding the relationship between dislocations and photoelectrochemical (PEC) cell performance are challenging issues for designing high-performance PEC devices. Therefore, it is very important to understand how oxygen vacancies (VO) or other defect states affect the performance of the photocatalyst in photoelectric transfer. So far, it has been found that defects in nano- or microcrystals can have two possible effects on PEC performance. Firstly, an electron-hole pair produced at the interface of the photoelectrode and electrolyte can recombine at the defect centers under illumination, thereby reducing the PEC performance. On the other hand, the defects could lead to higher light absorption in the longer wavelength region and may act as energy centers for the water splitting reaction, which can improve the PEC performance. Even though the dislocation growth of ZnO has been verified by full density functional theory (DFT) calculations and local density approximation (LDA) calculations, further studies are required to correlate the structures of ZnO with PEC performance. Exploring hybrid structures composed of graphene oxide (GO) and ZnO nanostructures offers not only a view of how complex structures form from simple starting materials but also the tools to improve PEC performance by understanding the underlying mechanisms of mutual interactions. As there are few studies of ZnO growth with other materials, and the growth mechanism in those cases has not been clearly explored yet, it is very important to understand the fundamental growth process of nanomaterials with the specific materials, so that rational and controllable syntheses of efficient ZnO-based hybrid materials can be designed to prepare nanostructures that exhibit significant PEC performance. Herein, we fabricated various ZnO nanostructures such as hollow spheres, bucky bowls, nanorods, and triangles, investigated their pH-dependent growth mechanism, and correlated the PEC performance with them. In particular, the origin of the well-controlled dislocation-driven growth and the transformation mechanism of ZnO nanorods to triangles on the GO surface are discussed in detail. Surprisingly, the addition of GO during the synthesis process not only tunes the morphology of ZnO nanocrystals but also creates more oxygen vacancies (oxygen defects) in the ZnO lattice, which suggests that the oxygen vacancies are created by a redox reaction between GO and ZnO in which surface oxygen is extracted from the surface of ZnO by the functional groups of GO. On the basis of our experimental and theoretical analysis, the detailed mechanism for the formation of specific structural shapes and oxygen vacancies via dislocations, and its impact on PEC performance, is explored. In water splitting performance, the maximum photocurrent density of the GO-ZnO triangles was 1.517 mA/cm² (under UV light, ~360 nm) vs. RHE, with a high incident photon-to-current conversion efficiency (IPCE) of 10.41%, which is the highest among all samples fabricated in this study and also one of the highest IPCEs reported so far for a GO-ZnO triangular-shaped photocatalyst.
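
The IPCE figure quoted above is conventionally computed from the photocurrent density, the wavelength, and the incident light power. The sketch below applies that standard relation using the reported photocurrent and wavelength but an assumed one-sun power density, since the actual monochromatic power is not given here; the result therefore differs from the reported 10.41%, which corresponds to the study's own illumination conditions.

```python
# Minimal sketch of the standard IPCE estimate from a measured photocurrent
# density. Wavelength and photocurrent follow the abstract; the incident
# light power density is an ASSUMED placeholder (not reported here).

j_photo = 1.517      # mA/cm^2, reported maximum photocurrent density
wavelength = 360.0   # nm, UV illumination wavelength
p_incident = 100.0   # mW/cm^2, ASSUMED one-sun equivalent power density

# IPCE (%) = 1240 * J[mA/cm^2] / (lambda[nm] * P[mW/cm^2]) * 100
ipce_percent = 1240.0 * j_photo / (wavelength * p_incident) * 100.0
print(f"IPCE under the assumed illumination ≈ {ipce_percent:.2f} %")
```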

Keywords: dislocation driven growth, zinc oxide, graphene oxide, water splitting

Procedia PDF Downloads 267
80 Monitoring the Responses to Nociceptive Stimuli During General Anesthesia Based on Electroencephalographic Signals in Surgical Patients Undergoing General Anesthesia with Laryngeal Mask Airway (LMA)

Authors: Ofelia Loani Elvir Lazo, Roya Yumul, Sevan Komshian, Ruby Wang, Jun Tang

Abstract:

Background: Monitoring the anti-nociceptive drug effect is useful because a sudden and strong nociceptive stimulus may result in untoward autonomic responses and muscular reflex movements. Monitoring the anti-nociceptive effects of perioperative medications has long been desired as a way to provide anesthesiologists with information regarding a patient’s level of antinociception and to preclude untoward autonomic responses and reflexive muscular movements from painful stimuli intraoperatively. To this end, electroencephalogram (EEG) based tools including BIS and qCON were designed to provide information about the depth of sedation, while qNOX was produced to inform on the degree of antinociception. The goal of this study was to compare the reliability of qCON/qNOX to BIS as specific indicators of response to nociceptive stimulation. Methods: Sixty-two patients undergoing general anesthesia with LMA were included in this study. Institutional Review Board (IRB) approval was obtained, and informed consent was acquired prior to patient enrollment. Inclusion criteria included American Society of Anesthesiologists (ASA) class I-III, 18 to 80 years of age, and either gender. Exclusion criteria included the inability to consent. Withdrawal criteria included conversion to endotracheal tube and EEG malfunction. BIS and qCON/qNOX electrodes were simultaneously placed on all patients prior to induction of anesthesia and were monitored throughout the case, along with other perioperative data, including patient response to noxious stimuli. All intraoperative decisions were made by the primary anesthesiologist without influence from qCON/qNOX. Student’s t-test, prediction probability (PK), and ANOVA were used to statistically compare the relative ability to detect nociceptive stimuli for each index. Twenty patients were included in the preliminary analysis. Results: A comparison of overall intraoperative BIS, qCON and qNOX indices demonstrated no significant difference between the three measures (N=62, p > 0.05). Meanwhile, index values for qNOX (62±18) were significantly higher than those for BIS (46±14) and qCON (54±19) immediately preceding patient responses to nociceptive stimulation in a preliminary analysis (N=20, p = 0.0408). Notably, certain hemodynamic measurements demonstrated a significant increase in response to painful stimuli (MAP increased from 74±13 mm Hg at baseline to 84±18 mm Hg during noxious stimuli [p = 0.032], and HR from 76±12 BPM at baseline to 80±13 BPM during noxious stimuli [p = 0.078]). Conclusion: In this observational study, BIS and qCON/qNOX provided comparable information on patients’ level of sedation throughout the course of an anesthetic. Meanwhile, increases in qNOX values demonstrated a superior correlation to an imminent response to stimulation relative to all other indices.

Keywords: antinociception, bispectral index (BIS), general anesthesia, laryngeal mask airway, qCON/qNOX

Procedia PDF Downloads 73
79 Treatment with Triton-X 100: An Enhancement Approach for Cardboard Bioprocessing

Authors: Ahlam Said Al Azkawi, Nallusamy Sivakumar, Saif Nasser Al Bahri

Abstract:

Diverse approaches and pathways are under development with the determination to eventually produce cellulosic biofuels and other bio-products at commercial scale in “bio-refineries”; however, the key challenge remains the high level of complexity in processing the feedstock, which is energy consuming. To overcome the complications of utilizing naturally occurring lignocellulosic biomass, using waste paper as a feedstock for bio-production may solve the problem. Besides being abundant and cheap, bioprocessing of waste paper has evolved in response to public concern over rising landfill costs resulting from shrinking landfill capacity. Cardboard (CB) is one of the major components of municipal solid waste and one of the most important items to recycle. Although 50-70% of cardboard is known to consist of cellulose and hemicellulose, the lignin surrounding them causes hydrophobic cross-links which physically obstruct hydrolysis by rendering the material resistant to enzymatic cleavage. Therefore, pretreatment is required to disrupt this resistance and to enhance the exposure of the targeted carbohydrates to the hydrolytic enzymes. Several pretreatment approaches have been explored, and the best ones would be those that can improve cellulose conversion rates and hydrolytic enzyme performance with minimal cost and fewer downstream processes. One of the promising strategies in this field is the application of surfactants, especially non-ionic surfactants. In this study, Triton-X 100 was used as a surfactant to treat cardboard prior to enzymatic hydrolysis, and the treatment was compared with acid treatment using 0.1% H2SO4. The surfactant enhancement was evaluated through its effect on the hydrolysis rate with respect to time, in addition to evaluating the structural changes and modifications by scanning electron microscopy (SEM) and X-ray diffraction (XRD) and through compositional analysis. Further work was performed to produce ethanol from CB treated with Triton-X 100 via separate hydrolysis and fermentation (SHF) and simultaneous saccharification and fermentation (SSF). The hydrolysis studies demonstrated an enhancement in saccharification of 35%. After 72 h of hydrolysis, a saccharification rate of 98% was achieved from CB enhanced with Triton-X 100, while only 89% saccharification was achieved from acid-pretreated CB. At 120 h, the saccharification exceeded 100% as reducing sugars continued to increase with time. This enhancement was not supported by any significant changes in the cardboard composition, as the cellulose, hemicellulose, and lignin contents remained the same after treatment, but obvious structural changes were observed in SEM images. The cellulose fibers were clearly exposed, with much less debris and fewer deposits compared to cardboard without Triton-X 100. The XRD pattern also revealed the ability of the surfactant to remove calcium carbonate, a filler found in waste paper known to have a negative effect on enzymatic hydrolysis. The cellulose crystallinity without surfactant was 73.18% and was reduced to 66.68%, rendering it more amorphous and susceptible to enzymatic attack. Triton-X 100 proved to effectively enhance CB hydrolysis and ultimately had a positive effect on the ethanol yield via SSF. Treating cardboard with only Triton-X 100 was sufficient to enhance enzymatic hydrolysis and ethanol production.
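
The crystallinity values quoted above are of the kind commonly obtained with the Segal crystallinity index from XRD peak intensities; the sketch below shows that relation with hypothetical intensities, without claiming it is the exact procedure used in the study.

```python
# Minimal sketch of the Segal crystallinity index often used with cellulose
# XRD patterns. Intensities below are hypothetical placeholders; the abstract
# reports CrI dropping from 73.18% to 66.68% after Triton-X 100 treatment,
# but its exact calculation procedure is not detailed here.

def segal_crystallinity(i_002, i_am):
    """CrI (%) = (I_002 - I_am) / I_002 * 100, where I_002 is the (002)
    crystalline peak intensity (~22.5 deg 2-theta) and I_am the amorphous
    minimum (~18 deg 2-theta)."""
    return (i_002 - i_am) / i_002 * 100.0

print(segal_crystallinity(i_002=1500.0, i_am=400.0))   # ~73.3 %
print(segal_crystallinity(i_002=1500.0, i_am=500.0))   # ~66.7 %
```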

Keywords: cardboard, enhancement, ethanol, hydrolysis, treatment, Triton-X 100

Procedia PDF Downloads 125
78 Improving Preconception Health and Lifestyle Behaviours through Digital Health Intervention: The OptimalMe Program

Authors: Bonnie R. Brammall, Rhonda M. Garad, Helena J. Teede, Cheryce L. Harrison

Abstract:

Introduction: Reproductive-aged women are at high risk of accelerated weight gain and obesity development, with pregnancy recognised as a critical contributory life phase. Healthy lifestyle interventions during the preconception and antenatal period improve maternal and infant health outcomes. Yet, interventions from preconception through to postpartum, and their translation and implementation into real-world healthcare settings, remain limited. OptimalMe is a randomised, hybrid implementation-effectiveness study of an evidence-based healthy lifestyle intervention. Here, we report engagement, acceptability of the intervention during preconception, and self-reported behaviour change outcomes resulting from the preconception phase of the intervention. Methods: Reproductive-aged women who upgraded their private health insurance to include pregnancy and birth cover, signalling a pregnancy intention, were invited to participate. Women received access to an online portal with preconception health and lifestyle modules, goal-setting and behaviour change tools, monthly SMS messages, and two coaching sessions (randomised to video or phone) prior to pregnancy. Results: Overall, n=527 expressed interest in participating. Of these, n=33 did not meet the inclusion criteria, n=8 were not contactable for eligibility screening, and n=177 failed to engage after screening, leaving n=309 who were enrolled in OptimalMe and randomised to an intervention delivery method. Engagement with coaching sessions dropped by 25% for session two, with no difference between intervention groups. Women had a mean (SD) age of 31.7 (4.3) years and, at baseline, a self-reported mean BMI of 25.7 (6.1) kg/m², with 55.8% (n=172) in the healthy BMI range. Behaviour was sub-optimal, with infrequent self-weighing (38.1%), prevalent alcohol consumption (57.1%), sub-optimal pre-pregnancy supplementation (61.5%), and incomplete medical screening. Post-intervention, 73.2% of women reported engagement with a GP for preconception care and 85.5% reported improved lifestyle behaviour since starting OptimalMe. Direct pre- and post-comparison of individual participant data showed that of 322 points of potential change (up-to-date cervical screening, elimination of high-risk behaviours [alcohol, drugs, smoking], uptake of preconception supplements, and improved weighing habits), 158 (49.1%) points of change were achieved. Health coaching sessions were found to improve accountability and confidence, yet further personalisation and support were desired. Engagement with video and phone sessions was comparable, having similar impacts on behaviour change, and both methods were well accepted and increased women's accountability. Conclusion: A low-intensity digital health and lifestyle program with embedded health coaching can improve the uptake of preconception care and lead to self-reported behaviour change. This is the first program of its kind to reach an otherwise healthy population of women planning a pregnancy. Women who were otherwise healthy showed divergence from preconception health and lifestyle objectives and benefited from the intervention. OptimalMe shows promising results for population-based behaviour change interventions that can improve preconception lifestyle habits and increase engagement with clinical health care for pregnancy preparation.

Keywords: preconception, pregnancy, preventative health, weight gain prevention, self-management, behaviour change, digital health, telehealth, intervention, women's health

Procedia PDF Downloads 71
77 Electrochemical Performance of Femtosecond Laser Structured Commercial Solid Oxide Fuel Cells Electrolyte

Authors: Mohamed A. Baba, Gazy Rodowan, Brigita Abakevičienė, Sigitas Tamulevičius, Bartlomiej Lemieszek, Sebastian Molin, Tomas Tamulevičius

Abstract:

Solid oxide fuel cells (SOFC) efficiently convert hydrogen to energy without producing any disturbances or contaminants. The core of the cell is the electrolyte. To improve the performance of electrolyte-supported cells, it is desirable to extend the available exchange surface area by micro-structuring the electrolyte with laser-based micromachining. This study investigated the electrochemical performance of cells micromachined using a femtosecond laser. A commercial ceramic SOFC (Elcogen AS) with a total thickness of 400 μm was structured with a 1030 nm wavelength Yb:KGW fs-laser Pharos (Light Conversion) using a 100 kHz repetition rate and 290 fs pulse duration, scanned with a galvanometer scanner (ScanLab) and focused with an f-Theta telecentric lens (SillOptics). The sample height was positioned using a motorized z-stage. The microstructures were formed by laser spiral trepanning in the Ni/YSZ anode-supported membrane at the central part of the ceramic piece, over a 5.5 mm diameter area corresponding to the active area of the cell. The whole surface was drilled with 275 µm diameter holes spaced 275 µm apart. The machining processes were carried out under ambient conditions. The microstructural effects of the femtosecond laser treatment on the electrolyte surface were investigated prior to the electrochemical characterisation using a scanning electron microscope (SEM) Quanta 200 FEG (FEI). A Novocontrol Alpha-A analyzer was used for electrochemical impedance spectroscopy in a symmetrical cell configuration with an excitation amplitude of 25 mV and a frequency range of 1 MHz to 0.1 Hz. Fuel cell characterization was carried out on an open flanges test setup by Fiaxell. The cell was electrically contacted using nickel mesh on the anode side and Au mesh on the cathode side. The cell was placed in a Kittec furnace with a PID temperature controller. The wires were connected to a Solartron 1260/1287 frequency analyzer for the impedance and current-voltage characterization. In order to determine the impact of the anode's microstructure on the performance of the commercial cells, the acquired results were compared to cells with an unstructured anode. Geometrical studies verified that the depth of the holes increased linearly with laser energy and number of scans. On the other hand, it decreased as the scanning speed increased. The electrochemical analysis demonstrates that the open-circuit voltage (OCV) values of the two cells are equal. Further, the modified cell's initial slope decreases from 0.253 (unmodified cell) to 0.209, revealing that the surface modification considerably decreases energy loss. Moreover, the maximum power densities for the microstructured cell and the reference cell are 1.45 and 1.16 W cm⁻², respectively.
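
The maximum power densities quoted above are the peaks of power-density curves derived from current-voltage sweeps; the sketch below shows how such a peak is located, using a crude synthetic j-V curve rather than the measured data.

```python
# Minimal sketch: locating the maximum power density on a fuel-cell j-V
# curve. The j-V points below are synthetic placeholders, not measured data.
import numpy as np

voltage = np.linspace(1.05, 0.4, 14)           # V, swept from OCV downwards
current_density = 4.2 * (1.05 - voltage)       # A/cm^2, crude linear j-V (assumed)

power_density = voltage * current_density      # W/cm^2
i_max = int(np.argmax(power_density))
print(f"peak power density ≈ {power_density[i_max]:.2f} W/cm^2 "
      f"at {voltage[i_max]:.2f} V")
```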

Keywords: electrochemical performance, electrolyte-supported cells, laser micro-structuring, solid oxide fuel cells

Procedia PDF Downloads 44
76 Analysis of the Potential of Biomass Residues for Energy Production and Applications in New Materials

Authors: Sibele A. F. Leite, Bernno S. Leite, José Vicente H. D´Angelo, Ana Teresa P. Dell’Isola, Julio CéSar Souza

Abstract:

The generation of bioenergy is one of the oldest and simplest biomass applications and is one of the safest options for minimizing emissions of greenhouse gasses and replacing the use of fossil fuels. In addition, the increasing development of technologies for biomass energy conversion, parallel to the advancement of research in biotechnology and engineering, has enabled new opportunities for the exploitation of biomass. Agricultural residues offer great potential for energy use, and Brazil is in a prominent position in the production and export of agricultural products such as banana and rice. Despite the economic importance and growth prospects of these activities, and the increasing amount of agricultural waste, these residues are rarely explored for energy and the production of new materials. Brazil produces almost 10.5 million tons/year of rice husk and 26.8 million tons/year of banana stem. Therefore, the aim of this study was to analyze the potential of biomass residues for energy production and applications in new materials. Rice husk and banana stem were characterized by physicochemical analyses using the following parameters: organic carbon, nitrogen (NTK), proximate analysis, FT-IR spectroscopy, thermogravimetric analysis (TG), calorific value, and silica content. Rice husk and banana stem presented attractive calorific values (from 11.5 to 13.7 MJ/kg), which may be compared to vegetal coal (21.25 MJ/kg). These results are due to the high organic matter content. According to the proximate analysis, the biomass has high carbon content (fixed and volatile) and low moisture and ash content. In addition, data obtained by the Walkley–Black method indicate that most of the carbon present in the rice husk (50.5 wt%) and in the banana stalk (35.5 wt%) should be understood as organic carbon (readily oxidizable). Organic matter was also detected by the Kjeldahl method, which gives the nitrogen values (especially in the organic form) for both residues: 3.8 and 4.7 g/kg for rice husk and banana stem, respectively. TG and DSC analyses support the previous results, as they provide information about the thermal stability of the samples, allowing a correlation between thermal behavior and chemical composition. According to the thermogravimetric curves, there were two main stages of mass loss. The first and smaller one occurred below 100 °C and corresponds to water loss, and the second occurred between 200 and 500 °C, indicating decomposition of the organic matter. Within this broad peak, the main loss occurred between 250-350 °C and is due to sugar decomposition (readily oxidizable components). Above 350 °C, mass loss of the biomass may be associated with lignin decomposition. Spectroscopic characterization provided only qualitative information about the organic matter, but the spectra showed absorption bands around 1030 cm-1 which may be attributed to silicon-containing species. This result is expected for the rice husk and deserves further investigation for the banana stalk, as it can bring a different perspective for this biomass residue.

Keywords: rice husk, banana stem, bioenergy, renewable feedstock

Procedia PDF Downloads 250
75 Investigating Sub-daily Responses of Water Flow of Trees in Tropical Successional Forests in Thailand

Authors: Pantana Tor-Ngern

Abstract:

In the global water cycle, tree water use (Tr) contributes largely to evapotranspiration, the total amount of water evaporated from terrestrial ecosystems to the atmosphere, which regulates climates. Tree water use responds to environmental factors, including atmospheric humidity and sunlight (represented by vapor pressure deficit, or VPD, and photosynthetically active radiation, or PAR, respectively) and soil moisture. In forests, Tr responses to such factors depend on species and their spatial and temporal variations. Tropical forests in Southeast Asia (SEA) have experienced land-use conversion from abandoned agricultural practices, resulting in patches of forests at different stages, including old-growth and secondary forests. Because inherent structures, such as canopy height and tree density, vary significantly among forests at different stages and can strongly affect their respective microclimates, Tr and its responses to changing environmental conditions in successional forests may differ. Daily and seasonal variations in the environmental factors may exert significant impacts on the respective Tr patterns. Extrapolating Tr data from short periods of days to longer periods of seasons or years can be complex and is important for estimating long-term ecosystem water use, which often includes normal and abnormal climatic conditions. Thus, this study aims to investigate the diurnal variation of Tr, using measured sap flux density (JS) data, with changes in VPD in eight evergreen tree species in an old-growth forest (hereafter OF; >200 years old) and a young forest (hereafter YF; <10 years old) in Khao Yai National Park, Thailand. The studied species included Syzygium syzygoides, Aquilaria crassna, Cinnamomum subavenium, Nephelium melliferum, and Altingia excelsa in OF, and Syzygium nervosum and Adinandra integerrima in YF. Only Syzygium antisepticum was found in both forest stages. Specifically, hysteresis, which indicates the asymmetrical change of JS in response to changing VPD across the daily timescale, was examined in these species. Results showed no hysteresis in any species in OF, except Altingia excelsa, which exhibited a 3-hour delayed JS response to VPD. In contrast, JS of all species in YF displayed one-hour delayed responses to VPD. The OF species that showed no hysteresis indicated good coupling of their canopies with the atmosphere, facilitating the gas exchange that is essential for tree growth. The delayed responses in Altingia excelsa in OF and in all species in YF were associated with higher JS in the morning than in the afternoon. This implies that these species were sensitive to drying air, closing their stomata relatively rapidly compared to the decrease in atmospheric humidity (increasing VPD). Such behavior is often observed in trees growing in dry environments. This study suggests that detailed investigation of JS at sub-daily timescales is imperative for a better understanding of the mechanistic responses of trees to the changing climate, which will benefit the improvement of earth system models.
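
One simple way to quantify the sub-daily lag between JS and VPD that underlies such hysteresis is the shift that maximises their cross-correlation; the sketch below demonstrates this on synthetic hourly series and is not necessarily the authors' method.

```python
# Minimal sketch: estimating the lag (in hours) between sap flux density (JS)
# and VPD as the shift that maximises their cross-correlation. The hourly
# series below are synthetic; the study's own analysis may differ.
import numpy as np

hours = np.arange(0, 24, 1.0)
vpd = np.clip(np.sin((hours - 6) / 24 * 2 * np.pi), 0, None)       # peaks at midday
js = np.clip(np.sin((hours - 6 + 1.0) / 24 * 2 * np.pi), 0, None)  # peaks one hour earlier

def best_lag(x, y, max_lag=4):
    """Lag (hours) applied to x that best aligns it with y."""
    lags = range(-max_lag, max_lag + 1)
    corr = [np.corrcoef(np.roll(x, lag), y)[0, 1] for lag in lags]
    return list(lags)[int(np.argmax(corr))]

# A negative lag means VPD must be shifted earlier to match JS, i.e. JS peaks
# before VPD (morning-biased JS, consistent with hysteresis).
print("JS vs VPD lag (h):", best_lag(vpd, js))
```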

Keywords: sap flow, tropical forest, forest succession, thermal dissipation probe

Procedia PDF Downloads 39
74 Decarbonising Urban Building Heating: A Case Study on the Benefits and Challenges of Fifth-Generation District Heating Networks

Authors: Mazarine Roquet, Pierre Dewallef

Abstract:

The building sector, both residential and tertiary, accounts for a significant share of greenhouse gas emissions. In Belgium, partly due to poor insulation of the building stock, but certainly because of the massive use of fossil fuels for heating buildings, this share reaches almost 30%. To reduce carbon emissions from urban building heating, district heating networks emerge as a promising solution, as they offer various assets such as improving the load factor, integrating combined heat and power systems, and enabling energy source diversification, including renewable sources and waste heat recovery. However, mainly for the sake of simple operation, most existing district heating networks still operate at high or medium temperatures ranging between 120°C and 60°C (the so-called second- and third-generation district heating networks). Although these district heating networks offer energy savings in comparison with individual boilers, such temperature levels generally require the use of fossil fuels (mainly natural gas) with combined heat and power. Fourth-generation district heating networks improve the transport and energy conversion efficiency by decreasing the operating temperature to between 50°C and 30°C. Yet, to decarbonise building heating, one must increase waste heat recovery and use mainly wind, solar or geothermal sources for the remaining heat supply. Fifth-generation networks operating between 35°C and 15°C offer the possibility to decrease transport losses even further, to increase the share of waste heat recovery, and to use electricity from renewable resources through heat pumps to generate low-temperature heat. The main objective of this contribution is to exhibit, on a real-life test case, the benefits of replacing an existing third-generation network with a fifth-generation one in order to decarbonise the heat supply of the building stock. The second objective of the study is to highlight the difficulties resulting from the use of a fifth-generation, low-temperature, district heating network. To do so, a simulation model of the district heating network, including its regulation, is implemented in the modelling language Modelica. This model is applied to the test case of the heating network of the University of Liège's Sart Tilman campus, consisting of around sixty buildings. The model is validated with monitoring data and then adapted for low-temperature networks. A comparison of primary energy consumption as well as CO2 emissions is made between the two cases to underline the benefits in terms of energy independence and GHG emissions. To highlight the complexity of operating a low-temperature network, the difficulty of adapting the mass flow rate to the heat demand is considered. This shows the difficult balance between thermal comfort and the electrical consumption of the circulation pumps. Several control strategies are considered and compared in terms of global energy savings. The developed model can be used to assess the potential for energy and CO2 emission savings when retrofitting an existing network or when designing a new one.
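
The trade-off between thermal comfort and circulation-pump electricity mentioned above follows from basic network hydraulics: for a fixed heat demand, a smaller supply-return temperature difference means a larger mass flow, and pumping power grows roughly with the cube of the flow. The sketch below illustrates this with purely hypothetical numbers.

```python
# Minimal sketch of the flow/pumping trade-off in a low-temperature network.
# Pressure drop scales roughly with flow squared, so pump power ~ flow cubed.
# All numbers are illustrative assumptions, not campus data.

CP_WATER = 4186.0        # J/(kg.K)
heat_demand = 2.0e6      # W, assumed network heat demand

def mass_flow(q_w, delta_t):
    return q_w / (CP_WATER * delta_t)          # kg/s

def relative_pump_power(m_dot, m_dot_ref):
    # dp ~ m_dot^2  =>  P_pump ~ m_dot * dp ~ m_dot^3
    return (m_dot / m_dot_ref) ** 3

m_3rd_gen = mass_flow(heat_demand, delta_t=30.0)   # e.g. a 90/60 C network
m_5th_gen = mass_flow(heat_demand, delta_t=10.0)   # e.g. a 25/15 C network

print(f"mass flow: {m_3rd_gen:.1f} -> {m_5th_gen:.1f} kg/s")
print(f"pump power ratio ~ {relative_pump_power(m_5th_gen, m_3rd_gen):.0f}x")
```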

Keywords: building simulation, fifth-generation district heating network, low-temperature district heating network, urban building heating

Procedia PDF Downloads 51
73 SPARK: An Open-Source Knowledge Discovery Platform That Leverages Non-Relational Databases and Massively Parallel Computational Power for Heterogeneous Genomic Datasets

Authors: Thilina Ranaweera, Enes Makalic, John L. Hopper, Adrian Bickerstaffe

Abstract:

Data are the primary asset of biomedical researchers and the engine for both discovery and research translation. As the volume and complexity of research datasets increase, especially with new technologies such as large single nucleotide polymorphism (SNP) chips, so too does the requirement for software to manage, process, and analyze the data. Researchers often need to execute complicated queries and conduct complex analyses of large-scale datasets. Existing tools to analyze such data, and other types of high-dimensional data, unfortunately suffer from one or more major problems. They typically require a high level of computing expertise, are too simplistic (i.e., do not fit realistic models that allow for complex interactions), are limited by computing power, do not exploit the computing power of large-scale parallel architectures (e.g., supercomputers, GPU clusters, etc.), or are limited in the types of analysis available, compounded by the fact that integrating new analysis methods is not straightforward. Solutions to these problems, such as those developed and implemented on parallel architectures, are currently available to only a relatively small portion of medical researchers with the necessary access and know-how. The past decade has seen a rapid expansion of data management systems for the medical domain. Much attention has been given to systems that manage phenotype datasets generated by medical studies. The introduction of heterogeneous genomic data for the research subjects that reside in these systems has highlighted the need for substantial improvements in software architecture. To address this problem, we have developed SPARK, an enabling and translational system for medical research that leverages existing high-performance computing resources and analysis techniques currently available or in development. It builds these into The Ark, an open-source web-based system designed to manage medical data. SPARK provides a next-generation biomedical data management solution based upon a novel micro-service architecture and Big Data technologies. The system serves to demonstrate the applicability of micro-service architectures for the development of high-performance computing applications. When applied to high-dimensional medical datasets such as genomic data, relational data management approaches with normalized data structures suffer from unfeasibly high execution times for basic operations such as inserts (e.g., importing a GWAS dataset) and for the queries that are typical of the genomics research domain. SPARK resolves these problems by incorporating the non-relational NoSQL databases that have been driven by the emergence of Big Data. SPARK provides researchers across the world with user-friendly access to state-of-the-art data management and analysis tools while eliminating the need for high-level informatics and programming skills. The system will benefit health and medical research by eliminating the burden of large-scale data management, querying, cleaning, and analysis. SPARK represents a major advancement in genome research technologies, vastly reducing the burden of working with genomic datasets and enabling cutting-edge analysis approaches that have previously been out of reach for many medical researchers.
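
As an illustration of the kind of denormalised document that a NoSQL store can ingest in one bulk operation, the sketch below uses pymongo against a hypothetical local MongoDB instance; the database, collection, and field names and the SNP identifiers are invented for illustration and are not SPARK's actual schema.

```python
# Minimal sketch of storing per-subject genotype records as denormalised
# documents in a NoSQL store, using pymongo against a hypothetical local
# MongoDB. Field names and values are illustrative only and do not reflect
# SPARK's actual data model.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
collection = client["study_db"]["genotypes"]

documents = [
    {
        "subject_id": "S0001",
        "chip": "ExampleSNPChip-v1",
        "genotypes": {"rs0000001": "AA", "rs0000002": "AG", "rs0000003": "GG"},
    },
    {
        "subject_id": "S0002",
        "chip": "ExampleSNPChip-v1",
        "genotypes": {"rs0000001": "AG", "rs0000002": "GG", "rs0000003": "GG"},
    },
]

# A single bulk insert replaces the many row-wise inserts a normalised
# relational layout would need for the same data.
collection.insert_many(documents)
print(collection.count_documents({"genotypes.rs0000002": "GG"}))
```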

Keywords: biomedical research, genomics, information systems, software

Procedia PDF Downloads 242
72 Plasma Chemical Gasification of Solid Fuel with Mineral Mass Processing

Authors: V. E. Messerle, O. A. Lavrichshev, A. B. Ustimenko

Abstract:

Currently, and in the foreseeable future (up to 2100), the global economy is oriented towards the use of organic fuels, mostly solid fuels, whose share in the generation of electric power amounts to 40%. The development of technologies for their effective and environmentally friendly use is therefore a priority problem. This work presents the results of thermodynamic and experimental investigations of a plasma technology for processing low-grade coals. The use of this technology for producing target products (synthesis gas, hydrogen, technical carbon, and valuable components of the mineral mass of coals) meets the modern environmental and economic requirements applied to basic industrial sectors. The plasma technology of coal processing for the production of synthesis gas from the coal organic mass (COM) and valuable components from the coal mineral mass (CMM) is highly promising. Its essence is heating the coal dust with a reducing electric-arc plasma to the temperature of complete gasification, at which the COM converts into synthesis gas free from ash particles, nitrogen oxides, and sulfur. At the same time, the oxides of the CMM are reduced by the carbon residue, producing valuable components such as technical silicon, ferrosilicon, aluminum, and silicon carbide, as well as trace amounts of rare metals such as uranium, molybdenum, vanadium, and titanium. Thermodynamic analysis of the process was performed using the versatile computation program TERRA. Calculations were carried out in the temperature range 300 - 4000 K at a pressure of 0.1 MPa. Bituminous coal with an ash content of 40% and a heating value of 16,632 kJ/kg was taken for the investigation. The gaseous phase of the coal processing products consists essentially of synthesis gas, with a concentration of up to 99 vol.% at 1500 K. The CMM components convert completely from the condensed phase into the gaseous phase at temperatures above 2600 K. At temperatures above 3000 K, the gaseous phase consists mainly of Si, Al, Ca, Fe, and Na, together with the compounds SiO, SiH, AlH, and SiS; the latter compounds dissociate into the relevant elements with increasing temperature. Complex coal conversion for the production of synthesis gas from the COM and valuable components from the CMM was investigated using a versatile experimental plant whose main element was a plug-flow plasma reactor. The material and thermal balances were used to derive the integral indicators of the process. Plasma-steam gasification of the low-grade coal with CMM processing gave a synthesis gas yield of 95.2%, a carbon gasification degree of 92.3%, and a coal desulfurization of 95.2%. The reduced material of the CMM was found in the slag in the form of ferrosilicon as well as silicon and iron carbides. The maximum reduction of the CMM oxides, reaching 47%, was observed in the slag taken from the walls of the plasma reactor in the areas with the highest temperatures. The synthesis gas produced in this way can be used for the synthesis of methanol, as a high-calorific reducing gas instead of blast-furnace coke, or as a power gas for thermal power plants. The reduced material of the CMM can be used in metallurgy.
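
For readers unfamiliar with the integral indicators quoted above, the short sketch below shows how such figures follow from a material balance: each indicator is the share of an element (carbon, sulfur) transferred out of the solid phase. The formula is the generic definition and the input masses are invented placeholders, not the authors' balance data.

```python
# Illustrative material-balance indicators only; input masses are assumed
# placeholders (kg per kg of coal), not the experimental values of the study.

def fraction_converted(mass_in_feed, mass_in_residue):
    """Share of an element transferred out of the solid phase."""
    return 1.0 - mass_in_residue / mass_in_feed

carbon_gasification = fraction_converted(mass_in_feed=0.45, mass_in_residue=0.035)
desulfurization = fraction_converted(mass_in_feed=0.008, mass_in_residue=0.0004)

print(f"carbon gasification degree: {carbon_gasification:.1%}")
print(f"coal desulfurization:       {desulfurization:.1%}")
```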

Keywords: gasification, mineral mass, organic mass, plasma, processing, solid fuel, synthesis gas, valuable components

Procedia PDF Downloads 590
71 Budget Impact Analysis of a Stratified Treatment Cascade for Hepatitis C Direct Acting Antiviral Treatment in an Asian Middle-Income Country through the Use of Compulsory and Voluntary Licensing Options

Authors: Amirah Azzeri, Fatiha H. Shabaruddin, Scott A. McDonald, Rosmawati Mohamed, Maznah Dahlui

Abstract:

Objective: A scaled-up treatment cascade with direct-acting antiviral (DAA) therapy is necessary to achieve the global WHO targets for hepatitis C virus (HCV) elimination in Malaysia. Recently, limited access to Sofosbuvir/Daclatasvir (SOF/DAC) has become available through compulsory licensing, with future access to Sofosbuvir/Velpatasvir (SOF/VEL) expected through voluntary licensing due to recent agreements. SOF/VEL has superior clinical outcomes, particularly for cirrhotic stages, but has higher drug acquisition costs than SOF/DAC. It has been proposed that a stratified treatment cascade might be the most cost-efficient approach for Malaysia, whereby all HCV patients are treated with SOF/DAC except for patients with cirrhosis, who are treated with SOF/VEL. This study aimed to conduct a five-year budget impact analysis, from the provider perspective, of the proposed stratified treatment cascade for HCV treatment in Malaysia. Method: A disease progression model developed from model-predicted HCV epidemiology data in Malaysia was used for the analysis. In scenario A, all HCV patients were treated with SOF/DAC for all disease stages, while in scenario B, SOF/DAC was used only for non-cirrhotic patients and SOF/VEL was used for cirrhotic patients. The model projections estimated the annual numbers of patients in care and the numbers of patients to be initiated on DAA treatment nationally. Healthcare costs associated with DAA therapy and disease stage monitoring were included to estimate the downstream cost implications. For scenario B, the estimated treatment uptake of SOF/VEL in cirrhotic patients was 25%, 50%, 75%, 100%, and 100% for 2018, 2019, 2020, 2021, and 2022, respectively. Healthcare costs were estimated based on standard clinical pathways for DAA treatment described in recent guidelines. All costs were reported in US dollars (conversion rate US$1 = RM4.09; price year 2018). Scenario analysis was conducted for 5% and 10% reductions in the SOF/VEL acquisition cost anticipated from competitive market pricing of generic DAAs in Malaysia. Results: The stratified treatment cascade with SOF/VEL in Scenario B was found to be cost-saving compared to Scenario A. A substantial portion of the cost reduction was due to the costs associated with DAA therapy, which yielded annual savings of USD 40 thousand (year 1) to USD 443 thousand (year 5) and cumulative savings of USD 1.1 million after five years. Cost reductions for disease stage monitoring were seen from year three onwards, resulting in cumulative savings of USD 1.1 thousand. Scenario analysis estimated cumulative savings of USD 1.24 to USD 1.35 million when the acquisition cost of SOF/VEL was reduced. Conclusion: A stratified treatment cascade with SOF/VEL is expected to be cost-saving and can result in a reduction in overall healthcare expenditure in Malaysia compared to treatment with SOF/DAC alone. The better clinical efficacy of SOF/VEL is expected to halt patients' HCV disease progression and may reduce the downstream costs of treating advanced disease stages. The findings of this analysis may be useful to inform healthcare policies for HCV treatment in Malaysia.
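
The structure of the comparison between the two scenarios can be sketched in a few lines of code. The patient counts and per-course costs below are placeholders chosen only to illustrate the mechanics and are not the study's inputs; the 25/50/75/100/100% uptake schedule is the one stated in the abstract.

```python
# Hedged sketch of the budget-impact logic: Scenario A treats every stage with
# SOF/DAC, Scenario B phases SOF/VEL in for cirrhotic patients. All counts and
# unit costs are hypothetical placeholders.

UPTAKE = [0.25, 0.50, 0.75, 1.00, 1.00]        # SOF/VEL uptake in cirrhotic patients, years 1-5
initiated = [(3000, 800)] * 5                  # (non-cirrhotic, cirrhotic) treatment starts per year

COST = {                                        # cost per treatment course in USD, hypothetical
    ("SOF/DAC", "non_cirrhotic"): 300.0,
    ("SOF/DAC", "cirrhotic"): 650.0,            # e.g. longer regimen assumed in cirrhosis
    ("SOF/VEL", "cirrhotic"): 500.0,
}

def scenario_a(nc, c):
    return nc * COST[("SOF/DAC", "non_cirrhotic")] + c * COST[("SOF/DAC", "cirrhotic")]

def scenario_b(nc, c, uptake):
    on_vel = c * uptake
    return (nc * COST[("SOF/DAC", "non_cirrhotic")]
            + on_vel * COST[("SOF/VEL", "cirrhotic")]
            + (c - on_vel) * COST[("SOF/DAC", "cirrhotic")])

savings = [scenario_a(nc, c) - scenario_b(nc, c, u) for (nc, c), u in zip(initiated, UPTAKE)]
print("annual savings:", [round(s) for s in savings], "| cumulative:", round(sum(savings)))
```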

Keywords: Malaysia, direct acting antiviral, compulsory licensing, voluntary licensing

Procedia PDF Downloads 145
70 Cereal Bioproducts Conversion to Higher Value Feed by Using Pediococcus Strains Isolated from Spontaneously Fermented Cereal, and Its Influence on Milk Production of Dairy Cattle

Authors: Vita Krungleviciute, Rasa Zelvyte, Ingrida Monkeviciene, Jone Kantautaite, Rolandas Stankevicius, Modestas Ruzauskas, Elena Bartkiene

Abstract:

The environmental impact of agricultural bioproducts from the processing of food crops is an increasing concern worldwide. Currently, cereal bran is used as a low-value ingredient for both human consumption and animal feed. The most popular bioprocessing technologies for increasing the nutritional and technological functionality of cereal bran are enzymatic processing and fermentation, and the most popular starters in fermented feed production are lactic acid bacteria (LAB), including pediococci. However, the ruminant digestive system is unique: billions of microorganisms help the cow digest and utilize the nutrients in the feed. To achieve efficient feed utilization and high milk yield, these microorganisms must have optimal conditions, and imbalance of this system is highly undesirable. The strains Pediococcus acidilactici BaltBio01 and Pediococcus pentosaceus BaltBio02 were isolated from spontaneously fermented rye, identified (by the rep-PCR method), and characterized by their growth (Thermo Bioscreen C automatic turbidometer), acidification rate (2 hours at pH 2.5), gas production (Durham method), and carbohydrate metabolism (API 50 CH test). The antimicrobial activities of the isolated pediococci against a variety of pathogenic and opportunistic bacterial strains previously isolated from diseased cattle, as well as their resistance to antibiotics, were evaluated (EFSA-FEEDAP method). The isolated pediococcus strains were cultivated in a barley/wheat bran (90/10, m/m) substrate, and the developed supplements, with a high content of valuable pediococci, were used to feed Lithuanian Black-and-White dairy cows. In addition, the influence of the supplements on milk production and composition was determined. Milk composition was evaluated with the LactoScope FTIR FT1.0 2001 (Delta Instruments, Holland). P. acidilactici BaltBio01 and P. pentosaceus BaltBio02 demonstrated versatile carbohydrate metabolism, growth at 30°C and 37°C, and acid tolerance. The isolated pediococcus strains were shown to be non-resistant to antibiotics and to have antimicrobial activity against undesirable microorganisms. By fermenting barley/wheat bran with the selected pediococcus strains, it is possible to produce a safer feedstock (reduced Enterobacteriaceae, total aerobic bacteria, yeast, and mould counts) with a high content of pediococci. A significantly higher milk yield was obtained (after 33 days) when the mixed pediococcus supplement was fed to dairy cows, while a similar effect was achieved with the separate strains only after 66 days of feeding. It can be stated that barley/wheat bran could be used to produce higher-value feed in order to increase milk production. Further research is needed to identify the main mechanism of this positive action.

Keywords: barley/wheat bran, dairy cattle, fermented feed, milk, pediococcus

Procedia PDF Downloads 288
69 Effect of Dietary Inclusion of Moringa oleifera Leaf Meal on Blood Biochemical Changes and Lipid Profile of Vanaraja Chicken in Tropics

Authors: Kaushalendra Kumar, Abhishek Kumar, Chandra Moni, Sanjay Kumar, P. K. Singh, Ajeet Kumar

Abstract:

The present study investigated the effect of dietary inclusion of Moringa oleifera leaf meal (MOLM) on the production efficiency, haemato-biochemical profile, and economics of Vanaraja birds under tropical conditions. The experiment was conducted for 56 days on 300 Vanaraja birds randomly divided into five experimental groups, including a control, of 60 birds each, with 20 chicks per replicate. Groups T1, T2, T3, T4, and T5 were offered 0, 5, 10, 15, and 20% Moringa oleifera leaf meal, respectively, along with the basal ration. All standard management practices, including the vaccination schedule, were followed during the experimental period. Locally available Moringa oleifera leaves were harvested at the mature stage and dried under shaded, aerated conditions. The dried leaves were then milled into a leaf meal and stored in airtight nylon bags to avoid contamination with foreign material until use in the experiment. Production parameters were calculated every week from the amount of feed consumed and the weight gain. The body weight gain of the T2 group was significantly (P < 0.05) higher, whereas that of the T3 group was comparable with the control. The feed conversion ratio of the T2 group was significantly (P < 0.05) lower than that of all other treatment groups, and the remaining groups were not comparable with each other. At the end of the experiment, blood samples were collected from the birds for haematology, while serum biochemistry was performed using a spectrophotometer following standard protocols. The haematological attributes did not differ significantly (P > 0.05) among the groups. However, serum biochemistry showed a significant reduction (P < 0.05) in blood urea nitrogen, uric acid, and creatinine levels with higher levels of MOLM in the diet, indicating better utilization of the protein supplied through MOLM. Total cholesterol and triglyceride levels declined significantly (P < 0.05) compared to the control group as the level of MOLM in the basal diet increased, and a decreasing trend in serum cholesterol was noted. The HDL value was highest for the T3 group and lowest for the T1 group, but no significant difference (P > 0.05) was found among the groups. This might be due to β-sitosterol, a bioactive compound present in MOLM, which lowers the plasma concentration of LDL. During the experiment, total, LDL, and VLDL cholesterol levels decreased significantly (P < 0.05) compared to the control group. The production efficiency of the birds improved significantly with 5%, followed by 10%, Moringa oleifera leaf meal among the treatment groups. However, the maximum profit per kg live weight was noted at the 10% level, and the least profit was observed in the 20% MOLM-fed group. It was concluded that dietary inclusion of MOLM improved overall performance without adversely affecting metabolic status and was effective in reducing cholesterol levels, supporting healthier chicken production for human consumption.

Keywords: hemato biochemistry, Moringa oleifera leaf meal, performance, Vanaraja birds

Procedia PDF Downloads 182
68 Effect of Supplementation with Fresh Citrus Pulp on Growth Performance, Slaughter Traits and Mortality in Guinea Pigs

Authors: Carlos Minguez, Christian F. Sagbay, Erika E. Ordoñez

Abstract:

Guinea pigs (Cavia porcellus) play prominent roles as experimental models for medical research and as pets. In developing regions such as South America, the Philippines, and sub-Saharan Africa, however, guinea pig meat is an economical source of animal protein for poor and malnourished people, because guinea pigs are mainly fed forage and do not compete directly with humans for food resources such as corn or wheat. To achieve efficient production of guinea pigs, it is essential to guard against vitamin C deficiency. The objective of this research was to investigate the effect of partially replacing alfalfa with fresh citrus pulp (Citrus sinensis) in the diet of guinea pigs on growth performance, slaughter traits, and mortality during the fattening period (between 20 and 74 days of age). A total of 300 guinea pigs were housed in collective cages of about ten animals (2 x 1 x 0.4 m) and distributed into two completely randomized groups. Guinea pigs in both groups were fed ad libitum with a standard commercial pellet diet (10 MJ of digestible energy/kg, 17% crude protein, 11% crude fiber, and 4.5% crude fat). The control group was supplied with fresh alfalfa as forage; in the treatment group, 30% of the alfalfa was replaced by fresh citrus pulp. Growth traits, including body weight (BW), average daily gain (ADG), feed intake (FI), and feed conversion ratio (FCR), were measured weekly. On day 74, the animals were slaughtered, and slaughter traits, including live weight at slaughter (LWS), full gastrointestinal tract weight (FGTW), hot carcass weight (with head; HCW), cold carcass weight (with head; CCW), drip loss percentage (DLP), and dressing-out carcass yield percentage (DCY), were evaluated. Contrasts between groups were obtained from generalized least squares estimates. Mortality was evaluated by Fisher's exact test because of the low counts in some cells. In the first week, there were significant differences in the growth traits BW, ADG, FI, and FCR, which were superior in the control group. These differences may have been due to the origin of the young guinea pigs, which were all raised without fresh citrus pulp before weaning and were therefore not familiar with the new supplement. In the second week, the treatment group had significantly higher ADG than the control group, which may have resulted from compensatory growth. During the subsequent weeks, no significant differences were observed between the two groups, nor were any significant differences observed over the total fattening period. No significant differences in slaughter traits or mortality rate were observed between the two groups. In conclusion, although there were no significant differences in growth performance, slaughter traits, or mortality, the use of fresh citrus pulp is recommended: it is a by-product of the orange juice industry and is cheap or free. Forage that includes fresh citrus pulp could reduce the quantity of alfalfa needed for meat guinea pigs by about 30% and, as a consequence, reduce production costs.
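
For the mortality comparison, Fisher's exact test on the 2x2 contingency table (died versus survived, per group) is the natural choice when some cells hold very few animals. The sketch below illustrates the calculation with invented counts; the study's actual numbers are not reproduced here.

```python
# Hedged sketch of the mortality test described above; the counts are
# hypothetical (the paper reports no significant difference between groups).
from scipy.stats import fisher_exact

#                 died  survived
control_group   = [3, 147]
treatment_group = [4, 146]

odds_ratio, p_value = fisher_exact([control_group, treatment_group])
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")
```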

Keywords: fresh citrus, growth, Guinea pig, mortality

Procedia PDF Downloads 171
67 Development of Alternative Fuels Technologies for Transportation

Authors: Szymon Kuczynski, Krystian Liszka, Mariusz Laciak, Andrii Oliinyk, Adam Szurlej

Abstract:

Currently, vehicles in road transport are powered almost exclusively by hydrocarbon-based fuels. As the consumption of hydrocarbon fuels increases, fuel quality parameters are being tightened for the sake of a cleaner environment, and, at the same time, efforts are under way to develop alternative fuels. The reasons for seeking alternatives to petrol and diesel are to increase vehicle efficiency, reduce environmental impact, cut greenhouse gas emissions, and conserve limited oil resources. Significant progress has been made in the development of alternative fuels such as methanol, ethanol, natural gas (CNG/LNG), LPG, dimethyl ether (DME), and biodiesel. In addition, the largest vehicle manufacturers are working on fuel cell vehicles and their introduction to the market. Alcohols such as methanol and ethanol are well suited as fuels for spark-ignition engines. Their advantages are a high antiknock value, which determines their use as an additive (10%) to unleaded petrol, and the relative cleanliness of the resulting exhaust gases. Ethanol is produced by distillation of plant products, whose diversion from food use can be questionable. Ethanol production can also be costly for the entire economy of a country, because it requires large, complex distillation plants, large amounts of biomass and, finally, a significant amount of fuel to sustain the process. At the same time, the fermentation of plant material releases large quantities of carbon dioxide into the atmosphere. Natural gas cannot be directly converted into liquid fuels, although such arrangements have been proposed in the literature; passing through intermediate products is still unavoidable. The most popular route is conversion to methanol, which can be processed further into dimethyl ether (DME) or olefins (ethylene and propylene) for the petrochemical sector. Methanol production uses natural gas as a raw material but requires expensive and advanced processes. With regard to pollutant emissions, the optimal vehicle fuel is LPG, which is used as an engine fuel in many countries. The production of LPG is inextricably linked with the production and processing of oil and gas, of which it represents only a small percentage; its potential as an alternative to traditional fuels is therefore proportionately limited. Biogas can also be an excellent engine fuel; however, it is subject to limitations similar to those of ethanol, since comparable production processes and raw materials are involved. The most important fuel in the effort to protect the environment against pollution is natural gas, which may be used either compressed (CNG) or liquefied (LNG). Natural gas can also be used for hydrogen production by steam reforming. Hydrogen can serve as a basic starting material for the chemical industry, an important raw material in refinery processes, and a fuel for vehicle transport. CNG represents an excellent compromise: the technology is proven and relatively cheap to use in many areas of the automotive industry. Natural gas can also be seen as an important bridge to other alternative energy sources that are harmless to the environment. For these reasons, CNG as a fuel attracts considerable interest worldwide.

Keywords: alternative fuels, CNG (Compressed Natural Gas), LNG (Liquefied Natural Gas), NGVs (Natural Gas Vehicles)

Procedia PDF Downloads 157
66 Characterization and Evaluation of the Dissolution Increase of Molecular Solid Dispersions of Efavirenz

Authors: Leslie Raphael de M. Ferraz, Salvana Priscylla M. Costa, Tarcyla de A. Gomes, Giovanna Christinne R. M. Schver, Cristóvão R. da Silva, Magaly Andreza M. de Lyra, Danilo Augusto F. Fontes, Larissa A. Rolim, Amanda Carla Q. M. Vieira, Miracy M. de Albuquerque, Pedro J. Rolim-Neto

Abstract:

Efavirenz (EFV) is a drug used as first-line treatment for AIDS. However, it has poor aqueous solubility and wettability, leading to problems with absorption in the gastrointestinal tract and with bioavailability. One of the most promising strategies to improve solubility is the use of solid dispersions (SD). This study therefore aimed to characterize SD of EFV with the polymers PVP-K30, PVPVA 64, and Soluplus® in order to find an optimal formulation for a future pharmaceutical product for AIDS therapy. Initially, physical mixtures (PM) and SD with the polymers, containing 10, 20, 50, and 80% of drug (w/w), were obtained by the solvent method. The best formulation among the SD was selected by an in vitro dissolution test. Finally, the chosen drug-carrier system, in all ratios obtained, was analyzed by the following techniques: differential scanning calorimetry (DSC), polarization microscopy, scanning electron microscopy (SEM), and infrared (IR) absorption spectroscopy. From the dissolution profiles of EFV, PM, and SD, the area under the curve (AUC) values were calculated. The data showed that the AUC of all PM is greater than that of isolated EFV; this result derives from the hydrophilic properties of the polymers, which decrease the surface tension between the drug and the dissolution medium and thereby increase the wettability of the drug. In parallel, the SD with the highest AUC values were those with the greatest amount of polymer (only 10% drug). As the amount of drug increased, these values either decreased or were statistically similar. The AUC values of the SD prepared with the three polymers followed this decreasing order: SD PVPVA 64-EFV 10% > SD PVP-K30-EFV 10% > SD Soluplus®-EFV 10%. The DSC curves of the SD did not show the endothermic event characteristic of the drug melting process, suggesting that the EFV was converted to its amorphous state. Polarized light microscopy showed significant birefringence in the PM, but this was not observed in the SD films, again suggesting conversion of the drug from the crystalline to the amorphous state. In the electron micrographs of all PM, regardless of the percentage of drug, the crystal structure of EFV was clearly detectable. Moreover, in the electron micrographs of the SD at the different drug-polymer ratios investigated, we observed particles with irregular size and morphology, together with an extensive change in the appearance of the polymer, so that the two components could no longer be differentiated. The IR spectra of the PM correspond to the superposition of the polymer and EFV bands, indicating no interaction between them, unlike the spectra of all SD, which showed complete disappearance of the band related to the axial (stretching) deformation of the NH group of EFV. Therefore, this study obtained a suitable formulation to overcome the solubility limitations of EFV: SD PVPVA 64-EFV 10% was chosen as the best system for delaying crystallization of the drug, reaching higher levels of supersaturation.
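
The ranking reported above rests on AUC values computed from the dissolution profiles. A minimal sketch of that calculation with the trapezoidal rule is shown below; the time points and percent-dissolved values are invented for illustration and are not the study's measurements.

```python
# Sketch of the AUC comparison used to rank dissolution profiles; the profile
# data are hypothetical placeholders, only the calculation method is shown.

def dissolution_auc(times_min, pct_dissolved):
    """Area under a dissolution profile (% dissolved x min), trapezoidal rule."""
    auc = 0.0
    for (t0, y0), (t1, y1) in zip(zip(times_min, pct_dissolved),
                                  zip(times_min[1:], pct_dissolved[1:])):
        auc += (t1 - t0) * (y0 + y1) / 2.0
    return auc

times = [0, 15, 30, 45, 60, 90, 120]                  # minutes
profiles = {                                          # hypothetical % dissolved
    "EFV alone":           [0, 5, 9, 12, 14, 16, 17],
    "PM PVPVA 64-EFV 10%": [0, 12, 20, 26, 30, 34, 36],
    "SD PVPVA 64-EFV 10%": [0, 45, 70, 82, 88, 92, 93],
}
for name, prof in profiles.items():
    print(f"{name:>21}: AUC = {dissolution_auc(times, prof):,.0f} %*min")
```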

Keywords: characterization, dissolution, Efavirenz, solid dispersions

Procedia PDF Downloads 611
65 Clinical Application of Measurement of Eyeball Movement for Diagnosis of Autism

Authors: Ippei Torii, Kaoruko Ohtani, Takahito Niwa, Naohiro Ishii

Abstract:

This paper describes the development of an objective index, based on the measurement of subtle eyeball movement, for the diagnosis of autism. Assessments of developmental disabilities vary, and diagnosis depends on the subjective judgment of professionals; a supplementary inspection method that enables anyone to obtain the same quantitative judgment is therefore needed. In conventional autism studies, diagnoses are made by comparing the time spent gazing at an object, but the results are inconsistent. First, we divided the pupil into four parts from the center using measurements of subtle eyeball movement and compared the number of pixels in the overlapping parts based on an afterimage. We then developed an objective evaluation indicator that distinguishes non-autistic and autistic people more clearly than conventional methods by analyzing the differences in subtle eyeball movement between the right and left eyes. Even when a person gazes at one point and the eyeballs stay fixed on that point, the eyes perform subtle fixational movements (i.e., tremors, drifts, and microsaccades) to keep the retinal image clear. In particular, microsaccades are linked to the nervous system and reflect the mechanisms that process vision in the brain. We converted the differences between these movements into numbers. The conversion proceeds as follows: 1) Select the pixels indicating the subject's pupil from the images of the captured frames. 2) Set up a reference image, known as an afterimage, from the pixels indicating the subject's pupil. 3) Divide the subject's pupil into four parts from the center in the acquired frame image. 4) Select the pixels in each divided part and count the number of pixels overlapping the present pixels, based on the afterimage. 5) Process the camera images at 24-30 fps and convert the amount of change in the pixels corresponding to the subtle movements of the right and left eyeballs into numbers. The change in area is obtained by measuring the difference between the afterimage from consecutive frames and the present frame, and this amount of change is taken as the quantity of subtle eyeball movement. This method made it possible to express changes in eyeball vibration as numerical values. By comparing the numerical values for the right and left eyes, we found that there is a difference in how much they move. We compared this difference between non-autistic and autistic people and analyzed the results. Our research subjects consisted of 8 children and 10 adults with autism, and 6 children and 18 adults with no disability. We measured the values during pursuit movements and fixations. We converted the difference in subtle movements between the right and left eyes into a graph and defined it as a multidimensional measure. We then set the classification boundary using the density function of the distribution, the cumulative frequency function, and the ROC curve. With this, we established an objective index for determining autistic and non-autistic cases and for evaluating false positives and false negatives.
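
A compact way to read steps 1-5 is as a per-quadrant pixel count against the afterimage, repeated frame by frame. The sketch below assumes each frame has already been reduced to a binary pupil mask and uses illustrative function names; it is a plausible reading of the published description under those assumptions, not the authors' implementation.

```python
# Hedged sketch of steps 1-5: quadrant-wise overlap with the afterimage, then
# frame-to-frame change as the measure of subtle eyeball movement.
import numpy as np

def quadrant_overlap(pupil_mask, afterimage_mask):
    """Split the pupil into four parts around its centre and count, per part,
    the pixels overlapping the afterimage (reference) mask."""
    ys, xs = np.nonzero(pupil_mask)
    cy, cx = int(ys.mean()), int(xs.mean())            # pupil centre
    h, w = pupil_mask.shape
    quads = [(slice(0, cy), slice(0, cx)), (slice(0, cy), slice(cx, w)),
             (slice(cy, h), slice(0, cx)), (slice(cy, h), slice(cx, w))]
    return np.array([int((pupil_mask[q] & afterimage_mask[q]).sum()) for q in quads])

def movement_amount(frames):
    """Per-frame change in quadrant overlap (steps 4-5); frames are binary masks."""
    afterimage = frames[0]                              # reference image (afterimage)
    counts = np.array([quadrant_overlap(f, afterimage) for f in frames])
    return np.abs(np.diff(counts, axis=0)).sum(axis=1)  # one value per frame pair

# At 24-30 fps, the left- and right-eye series can then be compared directly:
# asymmetry = np.abs(movement_amount(left_frames) - movement_amount(right_frames))
```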

Keywords: subtle eyeball movement, autism, microsaccade, pursuit eye movements, ROC curve

Procedia PDF Downloads 258
64 Carbon Footprint Assessment and Application in Urban Planning and Geography

Authors: Hyunjoo Park, Taehyun Kim, Taehyun Kim

Abstract:

Human life, activity, and culture depend on the wider environment. Cities offer economic opportunities for goods and services but cannot exist without supplies of food, energy, and water. Technological innovation in energy supply and transport speeds up the expansion of urban areas and their physical separation from agricultural land. As a result, the separation of urban and agricultural areas creates more energy demand for transporting food and goods between regions. As energy resources are drawn from all over the world, the environmental impact that crosses city boundaries is also growing. While advances in energy and other technologies can reduce the environmental impact of consumption, current technology still leaves a gap between energy supply and demand, even in technically advanced countries. Therefore, reducing energy demand is more realistic than relying solely on the development of technology for sustainable development. The purpose of this study is to introduce the application of carbon footprint assessment in the fields of urban planning and geography. In urban studies, the carbon footprint has been assessed at different geographical scales, such as the nation, city, region, household, and individual. Carbon footprint assessments for nations and cities can be made using national or city-level statistics on energy consumption categories. By means of carbon footprint calculations, it is possible to compare ecological capacity and deficit among nations and cities. The carbon footprint also offers great insight into the geographical distribution of carbon intensity at the regional level in agriculture. The study presents the background of carbon footprint applications in urban planning and geography through case studies, such as identifying sustainable land-use measures. At the micro level, a footprint quiz or survey can be used to measure household and individual carbon footprints. For example, the first case study collected carbon footprint data from a survey measuring the home energy use and travel behavior of 2,064 households in eight cities in Gyeonggi-do, Korea. The second case study analyzed the effects of net and gross population densities on residents' carbon footprints at an intra-urban scale in Seoul, the capital city of Korea. In this study, residents' individual carbon footprints were calculated by converting respondents' home and travel fossil fuel use into metric tons of carbon dioxide (tCO₂), multiplying the consumption of each energy source, such as electricity, natural gas, and gasoline, by the conversion factor corresponding to its carbon intensity. The carbon footprint is an important concept not only for mitigating climate change but also for sustainable development. As seen in the case studies, the carbon footprint may be measured and applied in various spatial units, including but not limited to countries and regions. These examples may provide new perspectives on carbon footprint applications in planning and geography. In addition, the consumption of food, goods, and services can also be included in carbon footprint calculations in urban planning and geography.
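
The conversion step described for the case studies amounts to multiplying each energy source's consumption by an emission factor and summing to tCO₂. The sketch below illustrates this; the factors and consumption figures are placeholder values, not the coefficients used in the cited studies.

```python
# Illustrative household carbon footprint calculation; emission factors and
# survey figures are assumed placeholders, not the case studies' coefficients.

EMISSION_FACTOR = {           # tCO2 per unit of annual consumption, hypothetical
    "electricity_kwh": 0.0005,
    "natural_gas_m3": 0.0021,
    "gasoline_l": 0.0023,
}

def household_footprint(usage):
    """usage: dict of annual consumption, e.g. {"electricity_kwh": 3500, ...}."""
    return sum(amount * EMISSION_FACTOR[source] for source, amount in usage.items())

survey = [
    {"electricity_kwh": 3500, "natural_gas_m3": 900, "gasoline_l": 1100},
    {"electricity_kwh": 2100, "natural_gas_m3": 600, "gasoline_l": 0},
]
print([f"{household_footprint(h):.2f} tCO2" for h in survey])
```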

Keywords: carbon footprint, case study, geography, urban planning

Procedia PDF Downloads 269
63 Provotyping Futures Through Design

Authors: Elisabetta Cianfanelli, Maria Claudia Coppola, Margherita Tufarelli

Abstract:

Design practices throughout history offer a critical understanding of society, since they have always conveyed values and meanings aimed at (re)framing reality by acting in everyday life: here, design acquires a cultural and normative character, since its artifacts, services, and environments hold the power to intercept, influence, and inspire thoughts, behaviors, and relationships. In this sense, design can be persuasive, engaging in the production of worlds and, as such, acting in the space between poietics and politics, so that chasing preferable futures and their aesthetic strategies becomes a matter full of political responsibility. This resonates with contemporary landscapes of radical interdependencies, which challenge designers to focus on complex socio-technical systems and to better support values such as equality and justice for both humans and nonhumans. In fact, it is in times of crisis and structural uncertainty that designers turn into visionaries at the service of society, envisioning scenarios and dwelling in the territories of imagination to conceive new fictions and frictions to be added to the thickness of the real. Here, design's main tasks are to develop options, to increase the variety of choices, and to cultivate its role as scout, jester, and agent provocateur for the public, so that a design for transformation emerges, making an explicit commitment to society and furthering structural change in a proactive and synergic manner. However, the exploration of possible futures is both a trap and a trampoline: although it embodies a radical research tool, it raises various challenges when the design process goes further and translates such a vision into an artefact - whether tangible or intangible - through which it should deliver that bit of future into everyday experience. Today designers are devising new tools and practices to tackle current wicked challenges, combining their approaches with other disciplinary domains: futuring through design thus arises from research strands such as speculative design, design fiction, and critical design, where the blending of design approaches and futures thinking brings an action-oriented and product-based approach to strategic insights. This contribution is positioned at the intersection of those approaches and aims to discuss design's tools of inquiry, through which it is possible to grasp the agency of imagined futures in the present. Since futures are not remote, they actively participate in creating path-dependent decisions, crystallized into the designed artifacts par excellence, prototypes, and their conceptual other, provotypes: both are unfinished and multifaceted, but the first are effective in reiterating solutions to problems already framed, while the second prove useful when the goal is to explore and break boundaries, bringing preferable futures closer. By focusing on provotypes throughout history that have challenged markets and, above all, social and cultural structures, the contribution's final aim is to understand the knowledge produced by provotypes, understood as design spaces where design's humanistic side might help develop a deeper sensibility about uncertainty and, most of all, about the unfinished character of societal artifacts, whose experimentation leaves marks and traces with which to build up f(r)ictions as vital sparks of plurality and collective life.

Keywords: speculative design, provotypes, design knowledge, political theory

Procedia PDF Downloads 110
62 Educational Knowledge Transfer in Indigenous Mexican Areas Using Cloud Computing

Authors: L. R. Valencia Pérez, J. M. Peña Aguilar, A. Lamadrid Álvarez, A. Pastrana Palma, H. F. Valencia Pérez, M. Vivanco Vargas

Abstract:

This work proposes a cooperative-competitive ('coopetitive') approach that allows coordinated work among the Secretary of Public Education (SEP), the Autonomous University of Querétaro (UAQ), and government funds from the National Council for Science and Technology (CONACYT) or other international organizations, in order to build an overall knowledge transfer strategy based on e-learning over the Cloud, in which experts in junior high and high school education, working in multidisciplinary teams, perform analysis, evaluation, design, production, validation, and large-scale knowledge transfer using a Cloud Computing platform. This allows teachers and students to have all the information required to ensure nationally standardized knowledge of topics such as mathematics, statistics, chemistry, history, ethics, and civics. The work will start with a pilot test in Spanish and, initially, in two indigenous languages, Otomí and Náhuatl. Otomí has more than 285,000 indigenous speakers in Querétaro and Mexico's central region; Náhuatl is the most widely spoken indigenous language in Mexico, with more than 1,550,000 speakers. Phase one of the project covers negotiations with indigenous tribes from different regions and the information and communication technologies needed to deliver the knowledge to indigenous schools in their native language. The methodology includes the following main milestones: identification of the indigenous areas where Otomí and Náhuatl are spoken; research with the SEP into the location of existing indigenous schools; analysis and inventory of current school conditions; negotiation with tribe chiefs; analysis of the technological communication requirements to reach the indigenous communities; identification and inventory of local teachers' technology knowledge; selection of a pilot topic; analysis of current student competence within the traditional education system; identification of local translators; design of the e-learning platform; design of the multimedia resources and the storage strategy for Cloud Computing; translation of the topic into both languages; training of indigenous teachers; pilot test; course release; project follow-up; analysis of student requirements for the new technological platform; and definition of a new and improved proposal with greater reach in topics and regions. Phase one of the project is important in several ways: it proposes a working technological scheme and focuses on the cultural impact in Mexico, so that indigenous tribes can improve their knowledge about new forms of crop improvement, home storage technologies, proven home remedies for common diseases, and ways of preparing foods containing major nutrients, as well as disclose the strengths and weaknesses of each region, communicate through cloud computing platforms offering regional products, and open communication spaces for inter-indigenous cultural exchange.

Keywords: Mexican indigenous tribes, education, knowledge transfer, cloud computing, Otomí, Náhuatl, language

Procedia PDF Downloads 375
61 Creative Mapping Landuse and Human Activities: From the Inventories of Factories to the History of the City and Citizens

Authors: R. Tamborrino, F. Rinaudo

Abstract:

Digital technologies offer the possibility of effectively converting historical archives into instruments of knowledge able to guide the interpretation of historical phenomena. Digital conversion and management of those documents make it possible to add other sources to a single, coherent model that permits the intersection of different data, opening up new interpretations and understandings. Urban history uses, among other sources, the inventories that register human activities in a specific space (e.g., cadastres and censuses). The geographic localisation of that information on cartographic supports allows the comprehension and visualisation of specific relationships between different historical realities, registering both the urban space and the people living there. These links, which merge data and documentation of different natures through a new organisation of the information, can suggest new interpretations of related events. For all these kinds of analysis, GIS platforms today represent the most appropriate answer. The design of the related databases is the key to realising an ad-hoc instrument that facilitates the analysis and intersection of data of different origins. Moreover, GIS has become the digital platform to which other kinds of data visualisation can be added. This research deals with the industrial development of Turin at the beginning of the 20th century. A census of factories carried out just prior to WWI provides the opportunity to test the potential of GIS platforms for the analysis of urban landscape modifications during the first industrial development of the town. The inventory includes data about location, activities, and people. The GIS is shaped in a creative way, linking different sources and digital systems with the aim of creating a new type of platform conceived as an interface that integrates different kinds of data visualisation. The data processing makes it possible to link this information to the urban space and also to visualise the growth of the city at that time. The sources related to the development of the urban landscape in that period are of different natures. The need to build, enlarge, modify, and join buildings to support the rapidly developing industrial activities is recorded in the official permits issued by the municipality and now stored in the Historical Archive of the Municipality of Turin. Those documents, consisting of reports and drawings, contain numerous data on the buildings themselves, including the block where the plot is located, the district, and the people involved, such as the owner, the investor, and the engineer or architect designing the industrial building. All these data make it possible, first, to reconstruct the process of change in the urban landscape using GIS and 3D modelling technologies, thanks to access to the drawings (2D plans, sections, and elevations) showing the previous and the planned situations. Furthermore, they give access to information for different queries of the linked dataset that could be useful for different research aims, such as economic, biographical, architectural, or demographic studies. By superimposing a layer of the present city, the past meets the present industrial heritage, and people meet urban history.
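
As an illustration of the kind of database-and-GIS linkage described above, the sketch below georeferences a couple of census records and joins them to a district layer with GeoPandas. File names, column names, and coordinates are assumptions for illustration; they do not reproduce the project's actual data model.

```python
# Minimal sketch (hypothetical data, not the project's database design):
# georeference factory census records and attach them to district polygons.
import geopandas as gpd
import pandas as pd

# Factory census records with coordinates derived from the historical addresses
census = pd.DataFrame({
    "factory": ["Officina A", "Fonderia B"],
    "activity": ["metalworking", "foundry"],
    "owner": ["Rossi", "Bianchi"],
    "x": [395100.0, 396250.0],          # hypothetical projected coordinates
    "y": [4991200.0, 4992050.0],
})
factories = gpd.GeoDataFrame(census,
                             geometry=gpd.points_from_xy(census.x, census.y),
                             crs="EPSG:32632")

districts = gpd.read_file("turin_districts_1914.shp")   # hypothetical historical layer

# Spatial join: attach each factory to the district polygon containing it, then
# count activities per district to map the industrial growth of the city.
joined = gpd.sjoin(factories, districts, how="left", predicate="within")
print(joined.groupby(["district", "activity"]).size())
```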

Keywords: digital urban history, census, digitalisation, GIS, modelling, digital humanities

Procedia PDF Downloads 172
60 Applying Napoleoni's 'Shell-State' Concept to Jihadist Organisations' Rise in Mali, Nigeria and Syria/Iraq, 2011-2015

Authors: Francesco Saverio Angiò

Abstract:

The Islamic State of Iraq and the Levant/Syria (ISIL/S), Al-Qaeda in the Islamic Maghreb (AQIM), and the People Committed to the Propagation of the Prophet's Teachings and Jihad, also known as 'Boko Haram' (BH), have fought successfully against the governments of Syria and Iraq, Mali, and Nigeria, respectively. According to Napoleoni, the 'shell-state' concept can explain the economic dimension and the financing model of the ISIL insurgency. However, she argues that AQIM and BH did not properly plan their financial models, so that her idea would not apply to these groups. Nevertheless, AQIM and BH's economic performance and their (short-lived) territorialisation suggest that their financing models respond to a well-defined strategy, which they were able to adapt to new circumstances. Therefore, Napoleoni's idea of the 'shell-state' can be applied to all three jihadist armed groups. In the last five years, together with other similar entities, ISIL/S, AQIM, and BH have been fighting against governments with insurgent tactics and terrorist acts, conquering and ruling quasi-states: physical spaces they presented as legitimate territorial entities, governed through a puritanical version of Islamic law. In these territories, they have exploited the traditional local economic networks. In addition, they have contributed to the development of legal and illegal transnational business activities. They have also established justice systems and created administrative structures to supply services. Napoleoni's 'shell-state' can describe the evolution of ISIL/S, AQIM, and BH, which have switched from insurgencies to proto- or quasi-state entities enjoying a significant share of power over territories and populations. Napoleoni first developed and applied the 'shell-state' concept to describe the nature of groups such as the Palestine Liberation Organisation (PLO), before using it to explain the expansion of ISIL. However, her original conceptualisation emphasises the economic dimension of the rise of an insurgency, focusing on the 'business' model and the insurgents' financial management skills, which permit them to turn into an organisation. Yet the idea of groups that use, coordinate, and seize territorial economic activities (while encouraging new criminal ones) can also be applied to the administrative, social, infrastructural, legal, and military levels of their insurgency, since these contribute to transforming the insurgency to the same extent as the economic dimension does. In addition, in Napoleoni's view, the 'shell-state' prism is valid for understanding the ISIL/S phenomenon because the group has carefully planned its financial steps. Napoleoni affirms that ISIL/S carries out activities in order to promote its conversion from a group relying on external sponsors into an entity that can penetrate and condition local economies. By contrast, in her account, the 'shell-state' could not be applied to AQIM or BH, which act more like smugglers. Nevertheless, despite their failure to control territories to the extent that ISIL has, AQIM and BH have responded strategically to their economic circumstances and have defined specific dynamics to ensure a stable flow of funds. Therefore, Napoleoni's theory is applicable.

Keywords: shell-state, jihadist insurgency, proto or quasi-state entity, economic planning, strategic financing

Procedia PDF Downloads 323