Search results for: management information system
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 30095

335 Differential Expression Analysis of Busseola fusca Larval Transcriptome in Response to Cry1Ab Toxin Challenge

Authors: Bianca Peterson, Tomasz J. Sańko, Carlos C. Bezuidenhout, Johnnie Van Den Berg

Abstract:

Busseola fusca (Fuller) (Lepidoptera: Noctuidae), the maize stem borer, is a major pest in sub-Saharan Africa. It causes economic damage to maize and sorghum crops and has evolved non-recessive resistance to genetically modified (GM) maize expressing the Cry1Ab insecticidal toxin. Since B. fusca is a non-model organism, very little genomic information is publicly available, and it is limited to some cytochrome c oxidase I, cytochrome b, and microsatellite data. The biology of B. fusca is well described but still poorly understood. This, in combination with its larval-specific behavior, may pose problems for limiting the spread of current resistant B. fusca populations or preventing resistance evolution in other susceptible populations. As part of on-going research into resistance evolution, B. fusca larvae were collected from Bt and non-Bt maize in South Africa, followed by RNA isolation (15 specimens) and sequencing on the Illumina HiSeq 2500 platform. Quality of reads was assessed with FastQC, after which Trimmomatic was used to trim adapters and remove low-quality, short reads. Trinity was used for the de novo assembly, whereas TransRate was used for assembly quality assessment. Transcript identification employed BLAST (BLASTn, BLASTp, and tBLASTx comparisons), for which two libraries (nucleotide and protein) were created from 3.27 million lepidopteran sequences. Several transcripts that have previously been implicated in Cry toxin resistance were identified for B. fusca. These included aminopeptidase N, cadherin, alkaline phosphatase, ATP-binding cassette transporter proteins, and mitogen-activated protein kinase. MEGA7 was used to align these transcripts to reference sequences from Lepidoptera to detect mutations that might potentially contribute to Cry toxin resistance in this pest. RSEM and Bioconductor were used to perform differential gene expression analysis on groups of B. fusca larvae challenged and unchallenged with the Cry1Ab toxin. Pairwise expression comparisons of transcripts that were at least 16-fold differentially expressed at a false-discovery-rate-corrected significance of p ≤ 0.001 were extracted and visualized in a hierarchically clustered heatmap using R. A total of 329,194 transcripts with an N50 of 1,019 bp were generated from the over 167.5 million high-quality paired-end reads. Furthermore, 110 transcripts were over 10 kbp long, of which the largest was 29,395 bp. BLAST comparisons resulted in the identification of 157,099 (47.72%) transcripts, among which only 3,718 (2.37%) were identified as Cry toxin receptors from lepidopteran insects. Based on their expression profiles, transcripts were grouped into three subclusters according to the similarity of their expression patterns. Several immune-related transcripts (pathogen recognition receptors, antimicrobial peptides, and inhibitors) were up-regulated in the larvae feeding on Bt maize, indicating an enhanced immune status in response to toxin exposure. Above all, extremely up-regulated arylphorin genes suggest that enhanced epithelial healing is one of the resistance mechanisms employed by B. fusca larvae against the Cry1Ab toxin. This study is the first to provide a resource base and some insights into a potential mechanism of Cry1Ab toxin resistance in B. fusca. The transcriptomic data generated in this study allow the identification of genes that can be targeted by biotechnological improvements of GM crops.
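As a rough illustration of the final selection step described above (transcripts at least 16-fold differentially expressed at an FDR-corrected p ≤ 0.001, hierarchically clustered for the heatmap), a minimal pandas/SciPy sketch might look as follows; the file names and column labels are assumptions for illustration, not the study's actual outputs.

# Hypothetical sketch of the filtering and clustering step: keep transcripts with
# |log2 fold change| >= 4 (i.e., at least 16-fold) and FDR <= 0.001, then order
# them by hierarchical clustering for a heatmap. Paths/columns are placeholders.
import pandas as pd
from scipy.cluster.hierarchy import linkage, leaves_list

de = pd.read_csv("edgeR_results.csv")                 # assumed columns: transcript, logFC, FDR
hits = de[(de["logFC"].abs() >= 4) & (de["FDR"] <= 0.001)]

expr = pd.read_csv("expression_matrix.csv", index_col=0)   # transcripts x samples
sub = expr.loc[hits["transcript"]]

order = leaves_list(linkage(sub, method="average"))   # cluster rows by expression pattern
print(sub.iloc[order].head())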

Keywords: epithelial healing, Lepidoptera, resistance, transcriptome

Procedia PDF Downloads 166
334 Feasibility of an Extreme Wind Risk Assessment Software for Industrial Applications

Authors: Francesco Pandolfi, Georgios Baltzopoulos, Iunio Iervolino

Abstract:

The impact of extreme winds on industrial assets and the built environment is gaining increasing attention from stakeholders, including the corporate insurance industry. This has led to a progressively more in-depth study of building vulnerability and fragility to wind. Wind vulnerability models are used in probabilistic risk assessment to relate a loss metric to an intensity measure of the natural event, usually a gust or a mean wind speed. In fact, vulnerability models can be integrated with the wind hazard, which consists of associating a probability with each intensity level in a time interval (e.g., by means of return periods), to provide an assessment of future losses due to extreme wind. This has also given impetus to world- and regional-scale wind hazard studies. Another approach often adopted for the probabilistic description of building vulnerability to wind is the use of fragility functions, which provide the conditional probability that selected building components will exceed certain damage states, given wind intensity. In fact, in the wind engineering literature, it is more common to find structural system- or component-level fragility functions than wind vulnerability models for an entire building. Loss assessment based on component fragilities requires some logical combination rules that define the building's damage state given the damage state of each component, as well as a consequence model that provides the losses associated with each damage state. When risk calculations are based on numerical simulation of a structure's behavior during extreme wind scenarios, the interaction of component fragilities is intertwined with the computational procedure. However, simulation-based approaches are usually computationally demanding and case-specific. In this context, the present work introduces the ExtReMe wind risk assESsment prototype Software, ERMESS, which is being developed at the University of Naples Federico II. ERMESS is a wind risk assessment tool for insurance applications to industrial facilities, collecting a wide assortment of available wind vulnerability models and fragility functions to facilitate their incorporation into risk calculations based on in-built or user-defined wind hazard data. This software implements an alternative method for building-specific risk assessment based on existing component-level fragility functions and on a number of simplifying assumptions for their interactions. The applicability of this alternative procedure is explored by means of an illustrative proof-of-concept example, which considers four main building components, namely the roof covering, roof structure, envelope walls and envelope openings. The application shows that, despite the simplifying assumptions, the procedure can yield risk evaluations that are comparable to those obtained via more rigorous building-level simulation-based methods, at least in the considered example. The advantage of this approach lies in the fact that a database of building component fragility curves can be put to use for the development of new wind vulnerability models to cover building typologies not yet adequately addressed by existing works and whose rigorous development is usually beyond the budget of portfolio-related industrial applications.
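The loss assessment logic described above can be illustrated with a minimal numerical sketch: a vulnerability curve (loss ratio versus gust speed) is integrated against the occurrence rates derived from a hazard curve to yield an expected annual loss. The curves and values below are placeholder assumptions, not ERMESS data or models.

# Illustrative combination of a wind hazard curve and a vulnerability model
# into an expected annual loss ratio. All curves are made-up placeholders.
import numpy as np

speeds = np.linspace(20.0, 80.0, 61)                          # gust speeds [m/s], assumed grid
exceed_rate = 0.5 * np.exp(-(speeds - 20.0) / 8.0)            # assumed annual exceedance rates
vulnerability = 1.0 / (1.0 + np.exp(-(speeds - 55.0) / 5.0))  # assumed loss-ratio curve

occurrence = -np.gradient(exceed_rate, speeds)                # occurrence rate density per speed
eal = np.trapz(vulnerability * occurrence, speeds)            # expected annual loss ratio
print(f"Expected annual loss ratio: {eal:.4f}")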

Keywords: component wind fragility, probabilistic risk assessment, vulnerability model, wind-induced losses

Procedia PDF Downloads 164
333 A Comprehensive Planning Model for Amalgamation of Intensification and Green Infrastructure

Authors: Sara Saboonian, Pierre Filion

Abstract:

The dispersed-suburban model has been the dominant one across North America for the past seventy years, characterized by automobile reliance, low density, and land-use specialization. Two planning models have emerged as possible alternatives to address the ills inflicted by this development pattern. First, there is intensification, which promotes efficient infrastructure by connecting high-density, multi-functional, and walkable nodes with public transit services within the suburban landscape. Second is green infrastructure, which provides environmental health and human well-being by preserving and restoring ecosystem services. This research studies incompatibilities and the possibility of amalgamating the two alternatives in an attempt to develop a comprehensive alternative to the suburban model, one that advocates density, multi-functionality and transit- and pedestrian-conduciveness, with measures capable of mitigating the adverse environmental impacts of compactness. The research investigates three Canadian urban growth centres, where intensification is the current planning practice and awareness of green infrastructure benefits is on the rise. The three centres are contrasted by their development stage, the presence or absence of protected natural land, their environmental approach, and their adverse environmental consequences according to the planning canons of different periods. The methods include reviewing the literature on green infrastructure planning, critiquing the Ontario provincial plans for intensification, surveying residents' preferences for alternative models, and interviewing officials who deal with local planning for the centres. Moreover, the research draws on the debates between New Urbanism and Landscape/Ecological Urbanism. The case studies expose the difficulties in creating urban growth centres that accommodate green infrastructure while adhering to intensification principles. First, the dominant status of intensification and the obstacles confronting intensification have monopolized the planners' concerns. Second, the tension between green infrastructure and intensification explains the absence of the green infrastructure typologies that correspond to intensification-compatible forms and dynamics. Finally, the lack of highlighted social-economic benefits of green infrastructure reduces residents' participation. Moreover, the results from the research provide insight into the predominant urbanization theories, New Urbanism and Landscape/Ecological Urbanism. In order to understand the political, planning, and ecological dynamics of such blending, dexterous context-specific planning is required. Findings suggest the influence of the following factors on amalgamating intensification and green infrastructure. First, producing ecosystem services-based justifications for green infrastructure development in the intensification context provides an expert-driven backbone for implementation programs; this knowledge base should be translated effectively to engage different urban stakeholders. Moreover, due to the limited greenfields in intensified areas, the spatial distribution and development of multi-level corridors, such as pedestrian-hospitable settings and transportation networks, along green infrastructure measures are required. Finally, to ensure the long-term integrity of implemented green infrastructure measures, significant investment in public engagement and education, as well as clarification of management responsibilities, is essential.

Keywords: ecosystem services, green infrastructure, intensification, planning

Procedia PDF Downloads 325
332 Addressing the Biocide Residue Issue in Museum Collections Already in the Planning Phase: An Investigation Into the Decontamination of Biocide Polluted Museum Collections Using the Temperature and Humidity Controlled Integrated Contamination Management Method

Authors: Nikolaus Wilke, Boaz Paz

Abstract:

Museum staff, conservators, restorers, curators, registrars, and art handlers, but potentially also museum visitors, are often exposed to the harmful effects of biocides, which have been applied to collections in the past for the protection and preservation of cultural heritage. Due to stable light, moisture, and temperature conditions, the biocidal active ingredients were preserved for much longer than originally assumed by chemists, pest controllers, and museum scientists. Given the requirements to minimize the use and handling of toxic substances and the obligations of employers regarding safe working environments for their employees, but also for visitors, the museum sector worldwide needs adequate decontamination solutions. Today there are millions of contaminated objects in museums. This paper introduces the results of a systematic investigation into the reduction rate of biocide contamination in various organic materials that were treated with the humidity and temperature controlled ICM (Integrated Contamination Management) method. In the past, collections were treated with a wide range, at times even a combination, of toxins, either preventively or to eliminate active insect or fungal infestations. It was only later that most of those toxins were recognized as CMR (carcinogenic, mutagenic, reprotoxic) substances. Among them were numerous chemical substances that are banned today because of their toxicity. While the biocidal effect of inorganic salts such as arsenic (arsenic(III) oxide), sublimate (mercury(II) chloride), copper oxychloride (basic copper chloride) and zinc chloride was known very early on, organic tar distillates such as paradichlorobenzene, carbolineum, creosote and naphthalene were increasingly used from the 19th century onwards, especially as wood preservatives. With the rapid development of organic synthesis chemistry in the 20th century and the development of highly effective warfare agents, pesticides and fungicides, these substances were replaced by organochlorine compounds (e.g., γ-hexachlorocyclohexane (lindane), dichlorodiphenyltrichloroethane (DDT), pentachlorophenol (PCP)), hormone-like derivatives such as synthetic pyrethroids (e.g., permethrin, deltamethrin, cyfluthrin) and phosphoric acid esters (e.g., dichlorvos, chlorpyrifos). Today we know that textile artifacts (costumes, uniforms, carpets, tapestries), wooden objects, herbaria, libraries, archives and historical wall decorations made of fabric, paper and leather were also widely treated with toxic inorganic and organic substances. The migration (emission) of pollutants from the contaminated objects leads to continuous (secondary) contamination and accumulation in the indoor air and dust. It is important to note that many of the mentioned toxic substances are also material-damaging; they cause discoloration and corrosion. Some, such as DDT, form crystals, which in turn can cause micro-tectonic, destructive shifting, for example, in paint layers. Museums must integrate sustainable solutions to address residual biocide problems already in the planning phase. Gas and dust phase measurements and analysis must become standard, as must methods of decontamination.

Keywords: biocides, decontamination, museum collections, toxic substances in museums

Procedia PDF Downloads 86
331 Key Aroma Compounds as Predictors of Pineapple Sensory Quality

Authors: Jenson George, Thoa Nguyen, Garth Sanewski, Craig Hardner, Heather Eunice Smyth

Abstract:

Pineapple (Ananas comosus), with its unique sweet flavour, is one of the most popular tropical, non-climacteric fruits consumed worldwide. It is also the third most important tropical fruit in world production. In Australia, 99% of pineapple production comes from the state of Queensland due to its favourable subtropical climatic conditions. The flavourful fruit is known to contain around 500 volatile organic compounds (VOCs) at varying concentrations, which greatly contribute to the flavour quality of pineapple fruit by providing distinct aroma sensory properties that are sweet, fruity, tropical, pineapple-like, caramel-like, coconut-like, etc. The aroma of pineapple is one of the important factors attracting consumers and strengthening the marketplace. To better understand the aroma of Australian-grown pineapples, a matrix-matched gas chromatography-mass spectrometry (GC-MS), headspace solid-phase microextraction (HS-SPME), stable-isotope dilution analysis (SIDA) method was developed and validated. The developed method represents a significant improvement over current methods through the incorporation of multiple external reference standards, multiple isotope-labelled internal standards, and a matching model system of the pineapple fruit matrix. This method was employed to quantify 28 key aroma compounds in more than 200 genetically diverse pineapple varieties from a breeding program. The Australian pineapple cultivars varied in the content and composition of free volatile compounds, which were predominantly comprised of esters, followed by terpenes, alcohols, aldehydes, and ketones. Using selected commercial cultivars grown in Australia and employing sensory analysis, the appearance (colour), aroma (intensity, sweet, vinegar/tang, tropical fruits, floral, coconut, green, metallic, vegetal, fresh, peppery, fermented, eggy/sulphurous) and texture (crunchiness, fibrousness, and juiciness) attributes were obtained. Relationships between sensory descriptors and volatiles were explored by applying multivariate analysis (PCA) to the sensory and chemical data. The key aroma compounds of pineapple exhibited a positive correlation with the corresponding sensory properties. The sensory and volatile data were also used to explore genetic diversity in the breeding population. GWAS was employed to unravel the genetic control of the pineapple volatilome and its interplay with fruit sensory characteristics. This study enhances our understanding of pineapple aroma (flavour) compounds and their biosynthetic pathways and expands breeding options for pineapple cultivars. This research provides foundational knowledge to support breeding programs, post-harvest and target market studies, and efforts to optimise the flavour of commercial pineapple varieties and their parent lines to produce better tasting fruits for consumers.
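A hedged sketch of the multivariate step mentioned above (PCA on the quantified volatiles, inspected against the sensory descriptors) is given below; the file names, dimensions, and variable names are illustrative assumptions rather than the study's actual data handling.

# Illustrative PCA on standardized volatile concentrations, with component scores
# correlated against mean sensory ratings. Inputs are hypothetical placeholders.
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

volatiles = pd.read_csv("pineapple_volatiles.csv", index_col=0)   # cultivars x 28 compounds
scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(volatiles))

sensory = pd.read_csv("sensory_means.csv", index_col=0)           # cultivars x descriptors
pcs = pd.DataFrame(scores, index=volatiles.index, columns=["PC1", "PC2"])
print(pcs.join(sensory).corr().loc[["PC1", "PC2"]])               # descriptor-score correlations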

Keywords: Ananas comosus, pineapple, flavour, volatile organic compounds, aroma, Gas chromatography–mass spectrometry (GC-MS), Head Space - Solid-phase microextraction (HS-SPME), Stable-isotope dilution analysis (SIDA).

Procedia PDF Downloads 16
330 Discriminant Shooting-Related Statistics between Winners and Losers in the 2023 FIBA U19 Basketball World Cup

Authors: Navid Ebrahmi Madiseh, Sina Esfandiarpour-Broujeni, Rahil Razeghi

Abstract:

Introduction: Quantitative analysis of game-related statistical parameters is widely used to evaluate basketball performance at both the individual and team levels. Non-free-throw shooting plays a crucial role as the primary scoring method, holding significant importance in the technical aspect of the game. The predictive value of game-related statistics has been explored in relation to various contextual and situational variables. Many similarities and differences have also been found between different age groups and levels of competition. For instance, in the World Basketball Championships after the 2010 rule change, 2-point field goals distinguished winners from losers in women's games but not in men's games, and the impact of successful 3-point field goals on women's games was minimal. The study aimed to identify and compare discriminant shooting-related statistics between winning and losing teams in the men's and women's FIBA-U19-Basketball-World-Cup-2023 tournaments. Method: Data from 112 observations (2 per game) of 16 teams (for each gender) in the FIBA-U19-Basketball-World-Cup-2023 were selected as samples. The data were obtained from the official FIBA website using Python. Specific information was extracted and organized into a DataFrame consisting of twelve variables, including shooting percentages, attempts, and scoring ratios for 3-pointers, mid-range shots, paint shots, and free throws. The following measures were computed: (1) Made% = successful attempts of a scoring type / total attempts of that scoring type; (2) Free-throw-pts% (free-throw score ratio) = (free-throw score / total score) × 100; (3) Mid-pts% (mid-range score ratio) = (mid-range score / total score) × 100; (4) Paint-pts% (paint score ratio) = (paint score / total score) × 100; (5) 3p-pts% (three-point score ratio) = (three-point score / total score) × 100. Independent t-tests were used to examine significant differences in shooting-related statistical parameters between winning and losing teams for both genders. Statistical significance was set at p < 0.05. All statistical analyses were completed with SPSS, Version 18. Results: The results showed that 3p-made%, mid-pts%, paint-made%, paint-pts%, mid-attempts, and paint-attempts were significantly different between winners and losers in men (t=-3.465, P<0.05; t=3.681, P<0.05; t=-5.884, P<0.05; t=-3.007, P<0.05; t=2.549, P<0.05; t=-3.921, P<0.05). For women, significant differences between winners and losers were found for 3p-made%, 3p-pts%, paint-made%, and paint-attempts (t=-6.429, P<0.05; t=-1.993, P<0.05; t=-1.993, P<0.05; t=-4.115, P<0.05; t=2.451, P<0.05). Discussion: The research aimed to compare shooting-related statistics between winners and losers in men's and women's teams at the FIBA-U19-Basketball-World-Cup-2023. Results indicated that men's winners excelled in 3p-made%, paint-made%, paint-pts%, paint-attempts, and mid-attempts, consistent with previous studies. This study found that losers in men's teams had higher mid-pts% than winners, which was inconsistent with previous findings. It has been suggested that winners tend to prioritize statistically efficient shots while forcing the opponent to take mid-range shots. In women's games, significant differences in 3p-made%, 3p-pts%, paint-made%, and paint-attempts were observed, indicating that winners relied on riskier outside scoring strategies. Overall, winners exhibited higher accuracy in paint and 3P shooting than losers, but they also relied more on outside offensive strategies. Additionally, winners acquired a higher ratio of their points from 3P shots, which demonstrates their confidence in their skills and willingness to take risks at this competitive level.
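A minimal sketch of the score-ratio calculations and independent t-tests described above might look as follows in Python, the language the authors mention for data extraction; the file name, column names, and winner flag are assumptions for illustration, not the authors' actual pipeline.

# Hypothetical DataFrame of team-game records with a winner flag; compute one
# of the score ratios and compare winners vs. losers with an independent t-test.
import pandas as pd
from scipy.stats import ttest_ind

games = pd.read_csv("u19_team_games.csv")                 # assumed export of the scraped FIBA data
games["paint_pts_pct"] = games["paint_points"] / games["total_points"] * 100
games["three_pts_pct"] = games["three_point_points"] / games["total_points"] * 100

winners = games[games["winner"] == 1]
losers = games[games["winner"] == 0]
t, p = ttest_ind(winners["paint_pts_pct"], losers["paint_pts_pct"])
print(f"paint-pts%: t = {t:.3f}, p = {p:.4f}")            # significance threshold p < 0.05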

Keywords: gender, losers, shoot-statistic, U19, winners

Procedia PDF Downloads 68
329 Simultech - Innovative Country-Wide Ultrasound Training Center

Authors: Yael Rieder, Yael Gilboa, S. O. Adva, Efrat Halevi, Ronnie Tepper

Abstract:

Background: Operation of ultrasound equipment is a core skill for many clinical specialties. As part of the training program at Simultech, a simulation center for Ob/Gyn at the Meir Medical Center, Israel, teaching how to operate ultrasound equipment requires dealing with misunderstandings of spatial and 3D orientation, failure of the operator to hold a transducer correctly, and limited ability to evaluate the data on the screen. We have developed a platform intended to endow physicians and sonographers with the clinical and operational skills of obstetric ultrasound. Simultech's simulations are focused on medical knowledge, risk management, technology operations and physician-patient communication. The simulations encompass extreme work conditions. Setup: Between eight and ten of the eight hundred and fifty physicians and sonographers of the Clalit health services, from seven hospitals and eight community centers across Israel, participate in individual Ob/Gyn training sessions each week. These include Ob/Gyn specialists, experts, interns, and sonographers. Innovative teaching and training methodologies: The six-hour training program includes: (1) An educational computer program that challenges trainees to deal with medical questions based upon ultrasound pictures and films. (2) Sophisticated hands-on simulators that challenge the trainees to practice correct grip of the transducer, elucidate pathology, and practice daily tasks such as biometric measurements and analysis of sonographic data. (3) Participation in a video-taped simulation which focuses on physician-patient communication. In the simulation, the physician is required to diagnose the clinical condition of a hired actress based on the data she provides and by evaluating the assigned ultrasound films accordingly. Giving ‘bad news’ to the patient may put the physician in a stressful situation that must be properly managed. (4) Feedback at the end of each phase is provided by a designated trainer, not a physician, who is specially qualified by Ob/Gyn senior specialists. (5) A group exercise in which the trainer presents a medico-legal case in order to encourage the participants to use their own experience and knowledge to conduct a productive ‘brainstorming’ session. Medical cases are presented and analyzed by the participants together with the trainer's feedback. Findings: (1) The training methods and content that Simultech provides allow trainees to review their medical and communication skills. (2) Simultech training sessions expose physicians to both basic and new, up-to-date cases, refreshing and expanding the trainee's knowledge. (3) Practicing on advanced simulators enables trainees to understand the sonographic space and to implement the basic principles of ultrasound. (4) Communication simulations were found to be beneficial for trainees who were unaware of their interpersonal skills. The trainer's feedback, supported by the recorded simulation, allows the trainee to draw conclusions about his or her performance. Conclusion: Simultech was found to benefit physicians at all levels of clinical expertise who deal with ultrasound. A break in the daily routine, together with attendance at a neutral educational center, can vastly improve performance and outlook.

Keywords: medical training, simulations, ultrasound, Simultech

Procedia PDF Downloads 248
328 Pupils' and Teachers' Perceptions and Experiences of Welsh Language Instruction

Authors: Mirain Rhys, Kevin Smith

Abstract:

In 2017, the Welsh Government introduced an ambitious, new strategy to increase the number of Welsh speakers in Wales to 1 million by 2050. The Welsh education system is a vitally important feature of this strategy. All children attending state schools in Wales learn Welsh as a second language until the age of 16 and are assessed at General Certificate of Secondary Education (GCSE) level. In 2013, a review of Welsh second language instruction in Key Stages 3 and 4 was completed. The report identified considerable gaps in teachers’ preparation and training for teaching Welsh; poor Welsh language ethos at many schools; and a general lack of resources to support the instruction of Welsh. Recommendations were made across a number of dimensions including curriculum content, pedagogical practice, and teacher assessment, training, and resources. With a new national curriculum currently in development, this study builds on this review and provides unprecedented detail into pupils’ and teachers’ perceptions of Welsh language instruction. The current research built on data taken from an existing capacity building research project on Welsh education, the Wales multi-cohort study (WMS). Quantitative data taken from WMS surveys with over 1200 pupils in schools in Wales indicated that Welsh language lessons were the least enjoyable subject among pupils. The current research aimed to unpick pupil experiences in order to add to the policy development context. To achieve this, forty-four pupils and four teachers in three schools from the larger WMS sample participated in focus groups. Participants from years 9, 11 and 13 who had indicated positive, negative and neutral attitudes towards the Welsh language in a previous WMS survey were selected. Questions were based on previous research exploring issues including, but not limited to pedagogy, policy, assessment, engagement and (teacher) training. A thematic analysis of the focus group recordings revealed that the majority of participants held positive views around keeping the language alive but did not want to take on responsibility for its maintenance. These views were almost entirely based on their experiences of learning Welsh at school, especially in relation to their perceived lack of choice and opinions around particular lesson strategies and assessment. Analysis of teacher interviews highlighted a distinct lack of resources (materials and staff alike) compared to modern foreign languages, which had a negative impact on student motivation and attitudes. Both staff and students indicated a need for more practical, oral language instruction which could lead to Welsh being used outside the classroom. The data corroborate many of the review’s previous findings, but what makes this research distinctive is the way in which pupils poignantly address generally misguided aims for Welsh language instruction, poor pedagogical practice and a general disconnect between Welsh instruction and its daily use in their lives. These findings emphasize the complexity of incorporating the educational sector in strategies for Welsh language maintenance and the complications arising from pedagogical training, support, and resources, as well as teacher and pupil perceptions of, and attitudes towards, teaching and learning Welsh.

Keywords: bilingual education, language maintenance, language revitalisation, minority languages, Wales

Procedia PDF Downloads 91
327 The 10,000-Fold Effect of Retrograde Neurotransmission: A New Concept for Cerebral Palsy Revival by the Use of Nitric Oxide Donors

Authors: V. K. Tewari, M. Hussain, H. K. D. Gupta

Abstract:

Background: Nitric oxide donors (NODs), namely intrathecal sodium nitroprusside (ITSNP) and oral tadalafil 20 mg post ITSNP, have been studied in this context in cerebral palsy patients for fast recovery. This work proposes two mechanisms for acute cases and one mechanism for chronic cases, which are interrelated, for physiological recovery. a) Retrograde neurotransmission (acute cases): 1) Normal excitatory impulse: at the synaptic level, glutamate activates NMDA receptors, with nitric oxide synthetase (NOS) on the postsynaptic membrane, for further propagation by the calcium-calmodulin complex. Nitric oxide (NO, produced by NOS) travels backward across the chemical synapse and binds the axon-terminal NO receptor/sGC of the presynaptic neuron, regulating anterograde neurotransmission (ANT) via retrograde neurotransmission (RNT). Heme is the ligand-binding site of the NO receptor/sGC. Heme exhibits > 10,000-fold higher affinity for NO than for oxygen (the 10,000-fold effect), and binding is completed in 20 msec. 2) Pathological conditions: normal synaptic activity, including both ANT and RNT, is absent. A NO donor (SNP) releases NO from NOS in the postsynaptic region. NO travels backward across the chemical synapse to bind to the heme of a NO receptor in the axon terminal of the presynaptic neuron, generating an impulse, as under normal conditions. b) Vasospasm (acute cases): Perforators show vasospastic activity. NO vasodilates the perforators via the NO-cAMP pathway. c) Long-term potentiation (LTP) (chronic cases): The NO-cGMP pathway plays a role in LTP at many synapses throughout the CNS and at the neuromuscular junction. LTP has been reviewed both generally and with respect to brain regions specific for memory/learning. Aims/Study Design: The principles of “generation of impulses from the presynaptic region to the postsynaptic region by very potent RNT (the 10,000-fold effect)” and “vasodilation of arteriolar perforators” are the basis of the authors’ hypothesis for treating cerebral palsy cases. Case-control prospective study. Materials and Methods: The experimental population included 82 cerebral palsy patients (10 patients were given control treatments without NOD or with 5% dextrose superfusion, and 72 patients comprised the NOD group). The mean time to superfusion was 5 months post-cerebral palsy. Pre- and post-NOD status was monitored by the Gross Motor Function Classification System for Cerebral Palsy (GMFCS), MRI, and TCD studies. Results: After 7 days in the NOD group, the mean change in the GMFCS score was an increase of 1.2 points; after 3 months, there was a mean increase of 3.4 points, compared to the control-group increase of 0.1 points at 3 months. MRI and TCD documented the improvements. Conclusions: NODs act swiftly in the treatment of CP (ITSNP boosts recovery and oral tadalafil maintains it at the desired level), acting within 7 days even when given 5 months post-cerebral palsy, via the three mechanisms described.

Keywords: cerebral palsy, intrathecal sodium nitroprusside, oral tadalafil, perforators, vasodilation, retrograde transmission, the 10,000-fold effect, long-term potentiation

Procedia PDF Downloads 341
326 Microplastics in Fish from Grenada, West Indies: Problems and Opportunities

Authors: Michelle E. Taylor, Clare E. Morrall

Abstract:

Microplastics are small particles produced for industrial purposes or formed by the breakdown of anthropogenic debris. Caribbean nations import large quantities of plastic products. The Caribbean region is vulnerable to natural disasters, and Climate Change is predicted to bring multiple additional challenges to island nations. Microplastics have been found in an array of marine environments and in a diversity of marine species. The occurrence of microplastics in the intestinal tracts of marine fish is a concern for human and ecosystem health, as pollutants and pathogens can associate with plastics. Studies have shown that the incidence of microplastics in marine fish varies with species and location. The prevalence of microplastics (≤ 5 mm) in fish species from Grenadian waters (representing pelagic, semi-pelagic and demersal lifestyles) harvested for human consumption has been investigated via gut analysis. Harvested tissue was digested in 10% KOH, and particles retained on a 0.177 mm sieve were examined. Microplastics identified have been classified according to type, colour and size. Over 97% of fish examined thus far (n=34) contained microplastics. Current and future work includes examining the invasive lionfish (Pterois spp.) for microplastics, investigating marine invertebrate species, and examining environmental sources of microplastics (i.e., rivers, coastal waters and sand). Owing to concerns about pollutant accumulation on microplastics and potential migration into organismal tissues, we plan to analyse fish tissue for mercury and other persistent pollutants. Despite having ~110,000 inhabitants, the island nation of Grenada imported approximately 33 million plastic bottles in 2013, of which it is estimated less than 5% were recycled. Over 30% of the imported bottles were ‘unmanaged’ and as such are potential litter/marine debris. A revised Litter Abatement Act passed into law in Grenada in 2015, but little enforcement of the law is evident to date. A local non-governmental organization (NGO), ‘The Grenada Green Group’ (G3), is focused on reducing litter in Grenada by lobbying government to implement the revised act and running sessions in schools, community groups and on local media and social media to raise awareness of the problems associated with plastics. A local private company has indicated willingness to support an Anti-Litter Campaign in 2018, and local awareness of the need for a reduction in single-use plastic and litter seems to be high. The Government of Grenada has called for a Sustainable Waste Management Strategy, and bans on both Styrofoam and plastic grocery bags are among the recommendations recently submitted. A Styrofoam ban will be in place at the St. George's University campus from January 1st, 2018, and many local businesses have already voluntarily moved away from Styrofoam. Our findings underscore the importance of continuing investigations into microplastics in marine life; this will contribute to understanding the associated health risks. Furthermore, our findings support action to mitigate the volume of plastics entering the world's oceans. We hope that Grenada's future will involve a lot less plastic. This research was supported by the Caribbean Node of the Global Partnership on Marine Litter.

Keywords: Caribbean, microplastics, pollution, small island developing nation

Procedia PDF Downloads 184
325 Environmental Impacts of Point and Non-Point Source Pollution in Krishnagiri Reservoir: A Case Study in South India

Authors: N. K. Ambujam, V. Sudha

Abstract:

Reservoirs are being contaminated all around the world by point source and non-point source (NPS) pollution. The most common NPS pollutants are sediments and nutrients. Krishnagiri Reservoir (KR) has been chosen for the present case study; it is located in the tropical semi-arid climatic zone of Tamil Nadu, South India. It is the main source of surface water in Krishnagiri district for meeting freshwater demands. The reservoir has lost about 40% of its water holding capacity due to sedimentation over a period of 50 years. Hence, from the research and management perspective, there is a need for sound knowledge of the spatial and seasonal variations of KR water quality. The present study encompasses the specific objectives of (i) investigating the longitudinal heterogeneity and seasonal variations of the physicochemical parameters, nutrients and biological characteristics of KR water and (ii) examining the extent of degradation of water quality in KR. Fifteen sampling points were identified by the uniform stratified method, and a systematic monthly sampling strategy was selected due to the highly dynamic nature of the reservoir's hydrological characteristics. The physicochemical parameters, major ions, nutrients and chlorophyll a (Chl a) were analysed. The trophic status of KR was classified using Carlson's Trophic State Index (TSI). All statistical analyses were performed using the Statistical Package for the Social Sciences (SPSS), version 16.0. Spatial maps were prepared for Chl a using ArcGIS. Observations in KR pointed out that electrical conductivity and major ions are highly variable factors, as the reservoir receives inflow from a catchment with different land use activities. The study of major ions in KR exhibited different trends in their values, and it could be concluded that as the monsoon progresses, the major ions in the water decrease or the water quality stabilizes. The inflow point of KR showed comparatively higher concentrations of nutrients, including nitrate, soluble reactive phosphorus (SRP), total phosphorus (TP), total suspended phosphorus (TSP) and total dissolved phosphorus (TDP), during monsoon seasons. This evidently showed the input of a significant amount of nutrients from the catchment through agricultural runoff. High concentrations of TDP and TSP at the lacustrine zone of the reservoir during the summer season revealed that there was a significant release of phosphorus from the bottom sediments. Carlson's TSI of KR ranged between 81 and 92 during the northeast monsoon and summer seasons. The high and permanent cyanobacterial bloom in KR could be mainly due to the internal loading of phosphorus from the bottom sediments. According to Carlson's TSI classification, Krishnagiri Reservoir was ranked in the hyper-eutrophic category. This study provides necessary baseline data on the spatio-temporal variations of water quality in KR and also demonstrates the impact of point and NPS pollution from the catchment area. The high TSI signals a serious obstacle to recovery, given the internal P loading and the hyper-eutrophic condition of KR. Several expensive internal measures for the reduction of internal P loading have been introduced by many scientists. However, the outcome of the present research suggests an innovative algae harvesting technique for the removal of sediment nutrients.
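For reference, Carlson's TSI is commonly computed from chlorophyll a and total phosphorus (both in µg/L) and Secchi depth (in m) as in the hedged sketch below; the study's exact implementation may differ, and the input values shown are placeholders rather than measured KR data.

# Carlson's Trophic State Index in its commonly cited form; values above ~70
# are usually read as hyper-eutrophic. Inputs below are illustrative only.
import math

def tsi_chla(chla_ugL):
    return 9.81 * math.log(chla_ugL) + 30.6

def tsi_tp(tp_ugL):
    return 14.42 * math.log(tp_ugL) + 4.15

def tsi_secchi(depth_m):
    return 60.0 - 14.41 * math.log(depth_m)

print(round(tsi_chla(150.0), 1), round(tsi_tp(400.0), 1), round(tsi_secchi(0.3), 1))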

Keywords: NPS pollution, nutrients, hyper-eutrophication, krishnagiri reservoir

Procedia PDF Downloads 302
324 Exploring Perspectives and Complexities of E-tutoring: Insights from Students Opting out of Online Tutor Service

Authors: Prince Chukwuneme Enwereji, Annelien Van Rooyen

Abstract:

In recent years, the integration of technology in education has transformed the learning landscape, particularly in online institutions. One technological advancement that has gained popularity is e-tutoring, which offers personalised academic support to students through online platforms. While e-tutoring has become well known and has been adopted to promote collaborative learning, there are still students who do not use these services for various reasons. However, little attention has been given to understanding the perspectives of students who have not utilized these services. The research objectives include identifying the perceived benefits that non-e-tutoring students believe e-tutoring could offer, such as enhanced academic support, personalized learning experiences, and improved performance. Additionally, the study explored the potential drawbacks or concerns that non-e-tutoring students associate with e-tutoring, such as concerns about efficacy, a lack of face-to-face interaction, and platform accessibility. The study adopted a quantitative research approach with a descriptive design to gather and analyze data on non-e-tutoring students' perspectives. Online questionnaires were employed as the primary data collection method, allowing for the efficient collection of data from many participants. The collected data were analyzed using the Statistical Package for the Social Sciences (SPSS). Ethical principles such as informed consent, anonymity of responses and protection of respondents against harm were maintained. Findings indicate that non-e-tutoring students perceive a sense of control over their own pace of learning, suggesting a preference for self-directed learning and the ability to tailor their educational experience to their individual needs and learning styles. They also exhibit high levels of motivation, believe in their ability to participate effectively in their studies and organize their academic work, and feel comfortable studying on their own without the help of e-tutors. However, non-e-tutoring students feel that e-tutors do not sufficiently address their academic needs and lack engagement. They also perceive a lack of clarity in the roles of e-tutors, leading to uncertainty about their responsibilities. In terms of communication, students feel overwhelmed by the volume of announcements and find repetitive information frustrating. Additionally, some students face challenges with their internet connection and the associated cost, which can hinder their participation in online activities. Furthermore, non-e-tutoring students express a desire for interaction with their peers and a sense of belonging to a group or team. They value opportunities for collaboration and teamwork in their learning experience and emphasize the importance of fostering social interactions and creating a sense of community in online learning environments. This study recommends that students seek alternative support systems by reaching out to professors or academic advisors for guidance and clarification. Developing self-directed learning skills is essential, empowering students to take charge of their own learning by setting objectives, creating their own study plans, and utilising resources. For HEIs, it was recommended that a variety of support services be made available to cater to the needs of all students, including non-e-tutoring students. HEIs should also ensure easy access to online resources, promote a supportive community, and regularly evaluate and adapt their support techniques to meet students' changing requirements.

Keywords: online tutor, student support, online education, educational practices, distance education

Procedia PDF Downloads 49
323 Fabrication of Antimicrobial Dental Model Using Digital Light Processing (DLP) Integrated with 3D-Bioprinting Technology

Authors: Rana Mohamed, Ahmed E. Gomaa, Gehan Safwat, Ayman Diab

Abstract:

Background: Bio-fabrication is a multidisciplinary research field that combines several principles, fabrication techniques, and protocols from different fields. The open-source software movement supports the use of open-source licenses for some or all software as part of the broader notion of open collaboration. Additive manufacturing (AM) is the concept behind 3D printing, a manufacturing method that builds objects layer by layer from computer-aided designs (CAD). There are several types of AM systems, and they can be categorized by the type of process used. One of these AM technologies is digital light processing (DLP), a 3D printing technology used to rapidly cure a photopolymer resin to create hard scaffolds. DLP uses a projected light source to cure (harden or crosslink) an entire layer at once. Current applications of DLP are focused on dental and medical applications. Other developments have been made in this field, leading to the revolutionary field of 3D bioprinting. The open-source movement was started to spread the concept of open-source software and to provide software or hardware that is cheaper, more reliable, and of better quality. Objective: Modification of a desktop 3D printer into a 3D bioprinter and the integration of DLP technology and bio-fabrication to produce an antibacterial dental model. Method: A desktop 3D printer was modified into a 3D bioprinter. Gelatin hydrogel and sodium alginate hydrogel were prepared at different concentrations. Rhizomes of Zingiber officinale, flower buds of Syzygium aromaticum, and bulbs of Allium sativum were extracted, and the extracts were prepared at different levels (powder, aqueous extract, total oil, and essential oil) for antibacterial testing. The agar well diffusion method with E. coli was used to perform the sensitivity test for the antibacterial activity of the extracts obtained from Zingiber officinale, Syzygium aromaticum, and Allium sativum. Lastly, DLP printing was performed to produce several dental models with the natural extracts combined with hydrogel to represent and simulate the hard and soft tissues. Result: The desktop 3D printer was modified into a 3D bioprinter using the open-source Marlin firmware and custom-made 3D-printed parts. Sodium alginate hydrogel and gelatin hydrogel were prepared at 5% (w/v), 10% (w/v), and 15% (w/v). Resin integration with the natural extracts of Zingiber officinale rhizomes, Syzygium aromaticum flower buds, and Allium sativum bulbs was done at 1-3% for each extract. Finally, the antimicrobial dental model was printed, exhibited antimicrobial activity, and was then merged with the sodium alginate hydrogel. Conclusion: The open-source movement was successful in modifying and producing a low-cost desktop 3D bioprinter, showing the potential for further enhancement in this scope. Additionally, the integration of DLP technology with bioprinting is a promising step toward the use of antimicrobial activity from natural products.
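As a small illustrative calculation (not taken from the paper), the hydrogel and extract preparations described above reduce to simple % w/v and loading arithmetic; the batch volumes and the assumption that the 1-3% extract loading is by mass are placeholders.

# Solute mass for a target % w/v hydrogel and extract mass for a given resin batch.
def solute_mass_g(percent_wv, volume_ml):
    """% w/v means grams of solute per 100 ml of solvent."""
    return percent_wv / 100.0 * volume_ml

for pct in (5, 10, 15):                      # alginate/gelatin concentrations reported
    print(f"{pct}% w/v in 50 ml -> {solute_mass_g(pct, 50):.1f} g")

resin_g = 100.0                              # assumed resin batch size
for pct in (1, 2, 3):                        # extract loading range reported
    print(f"{pct}% extract in {resin_g} g resin -> {resin_g * pct / 100:.1f} g extract")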

Keywords: 3D printing, 3D bio-printing, DLP, hydrogel, antibacterial activity, zingiber officinale, syzygium aromaticum, allium sativum, panax ginseng, dental applications

Procedia PDF Downloads 64
322 Modeling and Energy Analysis of Limestone Decomposition with Microwave Heating

Authors: Sofia N. Gonçalves, Duarte M. S. Albuquerque, José C. F. Pereira

Abstract:

The energy transition is spurred by structural changes in energy demand, supply, and prices. Microwave technology was first proposed as a faster alternative for cooking food, when it was found that food heated instantly when interacting with high-frequency electromagnetic waves. The dielectric properties account for a material's ability to absorb electromagnetic energy and dissipate this energy in the form of heat. Many energy-intense industries could benefit from electromagnetic heating, since many of their raw materials are dielectric at high temperatures. Limestone, a sedimentary rock, is a dielectric material intensively used in the cement industry to produce unslaked lime. A numerical 3D model was implemented in COMSOL Multiphysics to study limestone continuous processing under microwave heating. The model solves the two-way coupling between the energy equation and Maxwell's equations, as well as the coupling between heat transfer and chemical interfaces. Complementarily, a controller was implemented to optimize the overall heating efficiency and control the numerical model stability. This was done by continuously matching the cavity impedance and predicting the required energy for the system, avoiding energy inefficiencies. This controller was developed in MATLAB and successfully fulfilled all these goals. The influence of the limestone load on thermal decomposition and overall process efficiency was the main object of this study. The procedure considered the verification and validation of the chemical kinetics model separately from the coupled model. The chemical model was found to correctly describe the chosen kinetic equation, and the coupled model successfully solved the equations describing the numerical model. The interaction between the material flow and the Poynting vector of the electric field was found to influence limestone decomposition, as a result of the low dielectric properties of limestone. The numerical model considered this effect and took advantage of this interaction. The model was demonstrated to be highly unstable when solving non-linear temperature distributions. Limestone has a dielectric loss response that increases with temperature and has low thermal conductivity. For this reason, limestone is prone to thermal runaway under electromagnetic heating, as well as to numerical model instabilities. Five different scenarios were tested, considering material fill ratios of 30%, 50%, 65%, 80%, and 100%. Simulating the tube rotation for mixing enhancement proved to be beneficial and crucial for all loads considered. When a uniform temperature distribution is accomplished, the interaction between the electromagnetic field and the material is facilitated. The results pointed out the inefficient development of the electric field within the bed for the 30% fill ratio. The thermal efficiency showed a propensity to stabilize around 90% for loads higher than 50%. The process accomplished a maximum microwave efficiency of 75% for the 80% fill ratio, indicating that the tube has an optimal fill of material. Electric field peak detachment was observed for the case with 100% fill ratio, justifying the lower efficiencies compared to 80%. Microwave technology has been demonstrated to be an important ally for the decarbonization of the cement industry.
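The supervisory control idea described above (continuously adapting the applied power while monitoring impedance matching to avoid wasted energy and thermal runaway) can be sketched, very loosely, as a single feedback step; the actual controller was implemented in MATLAB and coupled to the COMSOL model, so the function below, its thresholds, and its gains are purely illustrative assumptions.

# Toy supervisory step: correct forward power toward a target heating rate and
# back off when the reflection coefficient indicates poor impedance matching.
def power_step(p_forward, t_rate, t_rate_target, refl_coeff, gain=50.0, p_max=6000.0):
    """Return the next forward-power setpoint [W]; all parameters are illustrative."""
    if abs(refl_coeff) > 0.3:                  # poor matching: reduce power instead of pushing more in
        return max(p_forward * 0.9, 0.0)
    p_next = p_forward + gain * (t_rate_target - t_rate)   # proportional correction on heating rate
    return min(max(p_next, 0.0), p_max)

print(power_step(p_forward=3000.0, t_rate=4.2, t_rate_target=5.0, refl_coeff=0.1))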

Keywords: CFD numerical simulations, efficiency optimization, electromagnetic heating, impedance matching, limestone continuous processing

Procedia PDF Downloads 146
321 Expanding Access and Deepening Engagement: Building an Open Source Digital Platform for Restoration-Based STEM Education in the Largest Public-School System in the United States

Authors: Lauren B. Birney

Abstract:

This project focuses upon the expansion of the existing "Curriculum and Community Enterprise for the Restoration of New York Harbor in New York City Public Schools" (NSF EHR DRL 1440869, NSF EHR DRL 1839656 and NSF EHR DRL 1759006). The project is recognized locally as "Curriculum and Community Enterprise for Restoration Science," or CCERS. CCERS is a comprehensive model of ecological restoration-based STEM education for urban public-school students. Following an accelerated rollout, CCERS is now being implemented in 120+ Title 1 funded NYC Department of Education middle schools, led by two cohorts of 250 teachers, serving more than 11,000 students in total. Initial results and baseline data suggest that the CCERS model, with the Billion Oyster Project (BOP) as its local restoration ecology-based STEM curriculum, is having profound impacts on students, teachers, school leaders, and the broader community of CCERS participants and stakeholders. Students and teachers report being receptive to the CCERS model and deeply engaged in the initial phase of curriculum development, citizen science data collection, and student-centered, problem-based STEM learning. The BOP CCERS Digital Platform will serve as the central technology hub for all research, data, data analysis, resources, materials and student data, promoting global interactions between communities. Research conducted included qualitative and quantitative data analysis. We continue to work internally on making edits and changes to accommodate a dynamic society. The STEM Collaboratory NYC® at Pace University New York City continues to act as the prime institution for the BOP CCERS project, as it has since the project's inception in 2014. The project continues to strive to provide opportunities in STEM for underrepresented and underserved populations in New York City. The replicable model offers other entities an opportunity to create this type of collaboration within their own communities and to bring a community together to address a notable local issue. Providing opportunities for young students to engage in community initiatives allows for a more cohesive set of stakeholders, the ability for young people to network, and additional resources for those students in need of support and structure. The project has planted more than 47 million oysters across 12 acres and 15 reef sites, with the help of more than 8,000 students and 10,000 volunteers. Additional enhancements and features on the BOP CCERS Digital Platform will continue over the next three years through funding provided by the National Science Foundation (NSF DRL EHR 1759006/1839656, Principal Investigator Dr. Lauren Birney, Professor, Pace University). Early results from the data indicate that the new version of the Platform is creating traction both nationally and internationally among community stakeholders and constituents. This project continues to focus on new collaborative partners that will support underrepresented students in STEM education. The advanced Digital Platform will allow us to connect with other countries and networks on a larger global scale.

Keywords: STEM education, environmental restoration science, technology, citizen science

Procedia PDF Downloads 63
320 Assessing Brain Targeting Efficiency of Ionisable Lipid Nanoparticles Encapsulating Cas9 mRNA/sgGFP Following Different Routes of Administration in Mice

Authors: Meiling Yu, Nadia Rouatbi, Khuloud T. Al-Jamal

Abstract:

Background: The treatment of neurological disorders with modern medical and surgical approaches remains difficult. Gene therapy, allowing the delivery of genetic materials that encode potential therapeutic molecules, represents an attractive option. The treatment of brain diseases with gene therapy requires the gene-editing tool to be delivered efficiently to the central nervous system. In this study, we explored the efficiency of different delivery routes, namely intravenous (i.v.), intra-cranial (i.c.), and intra-nasal (i.n.), to deliver stable nucleic acid-lipid particles (SNALPs) containing the gene-editing tools, namely Cas9 mRNA and sgRNA targeting GFP as a reporter protein. We hypothesise that SNALPs can reach the brain and perform gene-editing to different extents depending on the administration route. Intranasal administration (i.n.) offers an attractive and non-invasive way to access the brain, circumventing the blood-brain barrier. Successful delivery of gene-editing tools to the brain offers a great opportunity for therapeutic target validation and nucleic acid therapeutics delivery to improve treatment options for a range of neurodegenerative diseases. In this study, we utilised Rosa26-Cas9 knock-in mice, expressing GFP, to study the brain distribution and gene-editing efficiency of SNALPs after i.v., i.n. and i.c. routes of administration. Methods: A single guide RNA (sgRNA) against GFP was designed and validated by an in vitro nuclease assay. SNALPs were formulated and characterised using dynamic light scattering. The encapsulation efficiency of nucleic acids (NA) was measured by the RiboGreen™ assay. SNALPs were incubated in serum to assess their ability to protect NA from degradation. Rosa26-Cas9 knock-in mice were i.v., i.n., or i.c. administered with SNALPs to test in vivo gene-editing (GFP knockout) efficiency. SNALPs were given as three doses of 0.64 mg/kg sgGFP following i.v. and i.n. administration or as a single dose of 0.25 mg/kg sgGFP following i.c. administration. Knockout efficiency was assessed after seven days using Sanger sequencing and Inference of CRISPR Edits (ICE) analysis. The in vivo biodistribution of DiR-labelled SNALPs (SNALPs-DiR) was assessed at 24 h post-administration using an IVIS Lumina Series III. Results: The serum-stable SNALPs produced were 130-140 nm in diameter with ~90% nucleic acid loading efficiency. SNALPs could reach and stay in the brain for up to 24 h following i.v., i.n. and i.c. administration. Decreasing GFP expression (around 50% after i.v. and i.c. and 20% following i.n.) was confirmed by optical imaging. Despite the small number of mice used, ICE analysis confirmed GFP knockout in mouse brains. Additional studies are currently taking place to increase mouse numbers. Conclusion: The results confirmed efficient gene knockout achieved by SNALPs in Rosa26-Cas9 knock-in mice expressing GFP following the different routes of administration, in the order i.v. = i.c. > i.n. Each of the administration routes has its pros and cons. The next stages of the project involve assessing gene-editing efficiency in wild-type mice and replacing GFP as a model target with therapeutic target genes implicated in Motor Neuron Disease pathology.
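Two of the routine calculations mentioned above, RiboGreen-based encapsulation efficiency and a per-route summary of ICE editing scores, can be sketched as follows; all numbers and group labels are placeholders rather than study data.

# Encapsulation efficiency from fluorescence with and without particle lysis, plus
# a simple per-route mean of hypothetical ICE indel percentages.
def encapsulation_efficiency(total_signal, free_signal):
    """EE% = (total - unencapsulated) / total * 100, as typically derived from
    RiboGreen readings of lysed vs. intact particles."""
    return (total_signal - free_signal) / total_signal * 100.0

print(f"EE = {encapsulation_efficiency(total_signal=1000.0, free_signal=100.0):.1f}%")

ice_scores = {"i.v.": [52, 48, 55], "i.c.": [50, 47], "i.n.": [18, 22, 19]}   # placeholder values
for route, scores in ice_scores.items():
    print(f"{route}: mean indels = {sum(scores) / len(scores):.1f}%")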

Keywords: CRISPR, nanoparticles, brain diseases, administration routes

Procedia PDF Downloads 64
319 Treatment of Neuronal Defects by Bone Marrow Stem Cell Differentiation to Neuronal Cells Cultured on Gelatin-PLGA Scaffolds Coated with Nanoparticles

Authors: Alireza Shams, Ali Zamanian, Atefehe Shamosi, Farnaz Ghorbani

Abstract:

Introduction: Although the treatment of disabilities due to neuronal defects remains a remarkable challenge, progress in nanomedicine and tissue engineering is suggesting new medical methods. One of the promising strategies for reconstruction and regeneration of nervous tissue is replacing lost or damaged cells using specific scaffolds after compressive, ischemic, and traumatic injuries of the central nervous system. Furthermore, the ultrastructure, composition, and arrangement of tissue scaffolds affect cell grafts. We followed the implantation and differentiation of mesenchymal stem cells to neural cells on gelatin/poly(lactic-co-glycolic acid) (PLGA) scaffolds coated with iron nanoparticles. The aim of this study was to evaluate the capability of stem cells to differentiate into motor neuron-like cells under topographical cues and morphogenic factors. Methods and Materials: Bone marrow mesenchymal stem cells (BMMSCs) were obtained by primary cell culture of adult rat bone marrow harvested from the femur by the flushing method. BMMSCs were incubated with DMEM/F12 (Gibco), 15% FBS, and 100 U/ml pen/strep as media. BMMSCs were then seeded on Gel/PLGA scaffolds and tissue culture polystyrene (TCP) embedded and incorporated with iron nanoparticles (FeNPs; Fe3O4, Mw = 270.30 g/mol). For neuronal differentiation, 2×10⁵ BMMSCs were seeded on Gel/PLGA/FeNPs scaffolds, cultured for 7 days, and 0.5 µmol retinoic acid, 100 µmol ascorbic acid, 10 ng/ml basic fibroblast growth factor (Sigma, USA), 250 μM isobutylmethylxanthine, 100 μM 2-mercaptoethanol, and 0.2% B27 (Invitrogen, USA) were added to the media. Proliferation of BMMSCs was assessed using the MTT assay for cell survival. The morphology of BMMSCs and scaffolds was investigated by scanning electron microscopy analysis. Expression of neuron-specific markers was studied by immunohistochemistry. Data were analyzed by analysis of variance, and statistical significance was determined by Tukey’s test. Results: Our results revealed that differentiation and survival of BMMSCs into motor neuron-like cells on Gel/PLGA/FeNPs, as a biocompatible and biodegradable scaffold, were better than those cultured on Gel/PLGA in the absence of FeNPs and on TCP scaffolds. FeNPs increased the mechanical strength of the scaffolds but decreased their absorption capacity. Well-defined, oriented pores in the scaffolds due to the FeNPs may activate differentiation and synchronize cells by acting as a mechanoreceptor. The inductive effect of magnetic FeNPs, through one-way flow along the channels in the scaffolds, helps guide the cells and can facilitate the direction of their growth processes. Discussion: The progression of the biological properties of BMMSCs and the effects of FeNPs spreading under a magnetic field were evaluated in this investigation. The in vitro study showed that the Gel/PLGA/FeNPs scaffold provided a suitable structure for motor neuron-like cell differentiation. This could be a promising candidate for enhancing repair and regeneration in neural defects. Dynamic and static magnetic fields for inducing and constructing cells may provide better results for further experimental studies.
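As a minimal illustration of the statistical workflow described above (one-way ANOVA followed by Tukey's test on MTT viability data), the sketch below uses hypothetical absorbance values for the three culture conditions; it is not the study's data or code.

```python
# One-way ANOVA followed by Tukey's HSD test across three scaffold conditions.
# The MTT absorbance values are hypothetical placeholders.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
viability = {
    "Gel/PLGA/FeNPs": rng.normal(0.85, 0.05, 6),  # hypothetical MTT absorbance
    "Gel/PLGA":       rng.normal(0.70, 0.05, 6),
    "TCP":            rng.normal(0.60, 0.05, 6),
}

# One-way ANOVA across the three groups
f_stat, p_value = stats.f_oneway(*viability.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Tukey's HSD for pairwise comparisons when the ANOVA is significant
values = np.concatenate(list(viability.values()))
groups = np.repeat(list(viability.keys()), [len(v) for v in viability.values()])
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```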

Keywords: differentiation, mesenchymal stem cells, nanoparticles, neuronal defects, scaffolds

Procedia PDF Downloads 143
318 TNF Modulation of Cancer Stem Cells in Renal Clear Cell Carcinoma

Authors: Rafia S. Al-lamki, Jun Wang, Simon Pacey, Jordan Pober, John R. Bradley

Abstract:

Tumor necrosis factor alpha (TNF), signaling through TNFR2, may act as an autocrine growth factor for renal tubular epithelial cells. Clear cell renal carcinomas (ccRCC) contain cancer stem cells (CSCs) that give rise to progeny which form the bulk of the tumor. CSCs are rarely in cell cycle and, as non-proliferating cells, resist most chemotherapeutic agents. Thus, recurrence after chemotherapy may result from the survival of CSCs. Therapeutic targeting of both CSCs and the more differentiated bulk tumor populations may provide a more effective strategy for treatment of RCC. In this study, we hypothesized that TNFR2 signaling will induce CSCs in ccRCC to enter cell cycle, so that treatment with ligands that engage TNFR2 will render CSCs susceptible to chemotherapy. To test this hypothesis, we utilized wild-type TNF (wtTNF) or specific muteins selective for TNFR1 (R1TNF) or TNFR2 (R2TNF) to treat either short-term organ cultures of ccRCC and adjacent normal kidney (NK) tissue or cultures of CD133+ cells isolated from ccRCC and adjacent NK, hereafter referred to as stem cell-like cells (SCLCs). The effect of cyclophosphamide (CP), currently an effective anticancer agent, was tested on CD133+ SCLCs from ccRCC and NK before and after R2TNF treatment. Responses to TNF were assessed by flow cytometry (FACS), immunofluorescence, quantitative real-time PCR, TUNEL, and cell viability assays. The cytotoxic effect of CP was analyzed by Annexin V and propidium iodide staining with FACS. In addition, we assessed the effect of TNF on isolated SCLC differentiation using a three-dimensional (3D) culture system. Clinical samples of ccRCC contain a greater number of SCLCs compared to NK, and the number of SCLCs increases with higher tumor grade. Isolated SCLCs show expression of stemness markers (Oct4, Nanog, Sox2, Lin28) but not differentiation markers (cytokeratin, CD31, CD45, and EpCAM). In ccRCC organ cultures, wtTNF and R2TNF increase CD133 and TNFR2 expression and promote cell cycle entry, whereas wtTNF and R1TNF increase TNFR1 expression and promote cell death of SCLCs. Similar findings are observed in SCLCs isolated from NK, but the effect is greater in SCLCs isolated from ccRCC. Application of CP distinctly triggered apoptotic and necrotic cell death in SCLCs pre-treated with R2TNF as compared to CP treatment alone, with SCLCs from ccRCC more sensitive to CP than SCLCs from NK. Furthermore, TNF promotes differentiation of SCLCs to an epithelial phenotype in 3D cultures, confirmed by cytokeratin expression and loss of the stemness markers Nanog and Sox2. The differentiated cells show positive expression of TNF and TNFR2. These findings provide evidence that selective engagement of TNFR2 drives CSCs to cell proliferation/differentiation, and targeting of cycling cells with a TNFR2 agonist in combination with anti-cancer agents may be a potential therapy for RCC.

Keywords: cancer stem cells, ccRCC, cell cycle, cell death, TNF, TNFR1, TNFR2, CD133

Procedia PDF Downloads 241
317 Assessment of Efficiency of Underwater Undulatory Swimming Strategies Using a Two-Dimensional CFD Method

Authors: Dorian Audot, Isobel Margaret Thompson, Dominic Hudson, Joseph Banks, Martin Warner

Abstract:

In competitive swimming, after dives and turns, athletes perform underwater undulatory swimming (UUS), copying marine mammals’ method of locomotion. The body, performing this wave-like motion, accelerates the fluid downstream in its vicinity, generating propulsion with minimal resistance. Through this technique, swimmers can maintain greater speeds than in surface swimming and take advantage of the overspeed granted by the dive (or push-off). Almost all previous work has considered UUS performed at maximum effort. Critical parameters to maximize UUS speed are frequently discussed; however, this does not apply to most races. In only 3 out of the 16 individual competitive swimming events are athletes likely to attempt to perform UUS at the greatest speed, without considering the cost of locomotion. In the other cases, athletes will want to control the speed of their underwater swimming, attempting to maximize speed whilst considering energy expenditure appropriate to the duration of the event. Hence, there is a need to understand how swimmers adapt their underwater strategies to optimize speed within the allocated energetic cost. This paper develops a consistent methodology that enables different sets of UUS kinematics to be investigated. These may have different propulsive efficiencies and force generation mechanisms (e.g., force distribution along the body and force magnitude). The developed methodology, therefore, needs to: (i) provide an understanding of the UUS propulsive mechanisms at different speeds; (ii) investigate the key performance parameters when UUS is not performed solely for maximizing speed; (iii) consistently determine the propulsive efficiency of a UUS technique. The methodology is separated into two distinct parts: kinematic data acquisition and computational fluid dynamics (CFD) analysis. For the kinematic acquisition, the positions of several joints along the body and their sequencing were obtained either by video digitization or by underwater motion capture (Qualisys system). During data acquisition, the swimmers were asked to perform UUS at a constant depth in a prone position (facing the bottom of the pool) at different speeds: maximum effort, 100 m pace, 200 m pace, and 400 m pace. The kinematic data were input to a CFD algorithm employing a two-dimensional Large Eddy Simulation (LES). The algorithm adopted was specifically developed in order to perform quick unsteady simulations of deforming bodies and is therefore suitable for swimmers performing UUS. Despite its approximations, the algorithm is applied such that simulations are performed with the inflow velocity updated at every time step. It also enables calculation of the resistive forces (total and applied to each segment) and the power input of the modeled swimmer. Validation of the methodology is achieved by comparing the data obtained from the computations with the original data (e.g., sustained swimming speed). This method is applied to the different kinematic datasets and provides data on swimmers’ natural responses to pacing instructions. The results show how kinematics affect force generation mechanisms and hence how the propulsive efficiency of UUS varies for different race strategies.
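The following toy sketch is not the authors' LES solver; it only illustrates, under crude placeholder force models and assumed constants, the coupling described above: at each time step the net axial force updates the swimmer's forward speed, which becomes the inflow velocity of the next step, while resistive force and mechanical power input are integrated along the way.

```python
# Toy time-stepping sketch of the inflow-velocity update loop. All constants and the
# thrust/drag models are assumed placeholders, not values from the study.
import numpy as np

RHO, AREA, CD = 1000.0, 0.1, 0.3      # water density, frontal area (m^2), drag coefficient
MASS, DT = 75.0, 0.01                 # swimmer mass (kg), time step (s)

def kick_thrust(t, freq=2.0, peak=200.0):
    """Placeholder for the segment-summed propulsive force of one kick cycle (N)."""
    return peak * max(0.0, np.sin(2 * np.pi * freq * t))

u = 2.8                               # speed just after push-off (m/s)
work_in = work_drag = 0.0
for step in range(int(5.0 / DT)):     # simulate 5 s of underwater undulatory swimming
    t = step * DT
    drag = 0.5 * RHO * CD * AREA * u ** 2
    u += (kick_thrust(t) - drag) / MASS * DT   # inflow velocity updated every time step
    work_in += kick_thrust(t) * u * DT         # mechanical power input, integrated
    work_drag += drag * u * DT                 # power spent overcoming resistance

print(f"speed after 5 s: {u:.2f} m/s")
print(f"quasi-propulsive efficiency (drag power / input power): {work_drag / work_in:.2f}")
```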

Keywords: CFD, efficiency, human swimming, hydrodynamics, underwater undulatory swimming

Procedia PDF Downloads 188
316 Comparison of On-Site Stormwater Detention Policies in Australian and Brazilian Cities

Authors: Pedro P. Drumond, James E. Ball, Priscilla M. Moura, Márcia M. L. P. Coelho

Abstract:

In recent decades, On-site Stormwater Detention (OSD) systems have been implemented in many cities around the world. In Brazil, urban drainage source control policies were created in the 1990s and were mainly based on OSD. The concept of this technique is to detain the additional stormwater runoff caused by impervious areas in order to maintain pre-urbanization peak flow levels. In Australia, OSD was first adopted in the early 1980s by the Ku-ring-gai Council in Sydney’s northern suburbs and by Wollongong City Council, and many papers on the topic were published at that time. However, source control techniques related to stormwater quality have since come to the forefront, and OSD has been relegated to the background. In order to evaluate the effectiveness of current regulations regarding OSD, existing policies were compared between Australian cities, in a country considered experienced in the use of this technique, and Brazilian cities, where OSD adoption has been increasing. The cities selected for analysis were Wollongong and Belo Horizonte, the first municipalities to adopt OSD in their respective countries, and Sydney and Porto Alegre, cities whose policies are local references. The Australian and Brazilian cities are located in the Southern Hemisphere, and similar rainfall intensities can be observed, especially in storm bursts longer than 15 minutes. Regarding technical criteria, the Brazilian cities have a site-based approach, analyzing only on-site system drainage. This approach is criticized for not evaluating impacts on urban drainage systems and, in rare cases, may increase peak flows downstream. The city of Wollongong and most of the Sydney councils adopted a catchment-based approach, requiring the use of Permissible Site Discharge (PSD) and Site Storage Requirement (SSR) values based on analysis of entire catchments via hydrograph-producing computer models. Based on the premise that OSD should be designed to dampen storms of 100-year Average Recurrence Interval (ARI), the values of PSD and SSR in these four municipalities were compared. In general, the Brazilian cities presented low values of PSD and high values of SSR. This can be explained by the site-based approach and the low runoff coefficient value adopted for pre-development conditions. The results clearly show the differences between the approaches and methodologies adopted in OSD designs among Brazilian and Australian municipalities, especially with regard to PSD values, which lie at opposite ends of the scale. However, the lack of research regarding the real performance of constructed OSD does not allow for determining which is best. It is necessary to investigate OSD performance in real situations, assessing the damping provided throughout its useful life, maintenance issues, debris blockage problems, and the parameters related to rainfall-runoff methods. Acknowledgments: The authors wish to thank CNPq - Conselho Nacional de Desenvolvimento Científico e Tecnológico (Chamada Universal – MCTI/CNPq Nº 14/2014), FAPEMIG - Fundação de Amparo à Pesquisa do Estado de Minas Gerais, and CAPES - Coordenação de Aperfeiçoamento de Pessoal de Nível Superior for their financial support.
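A simple mass-balance sketch may help show how a Site Storage Requirement follows from a Permissible Site Discharge: the detention volume is the runoff volume of the design burst minus what the outlet is allowed to pass. This is an illustration only (not the hydrograph-producing catchment models cited above), and the rainfall intensity, runoff coefficient, lot area, and PSD values are hypothetical.

```python
# Illustrative single-burst mass balance linking PSD (outflow limit) to SSR (storage).
# All input values are hypothetical.
def site_storage_requirement(intensity_mm_h, duration_min, runoff_coeff,
                             area_ha, psd_l_s):
    """Return an approximate SSR in m^3 for one storm burst."""
    area_m2 = area_ha * 10_000
    inflow_m3 = runoff_coeff * (intensity_mm_h / 1000) * (duration_min / 60) * area_m2
    outflow_m3 = psd_l_s / 1000 * duration_min * 60
    return max(0.0, inflow_m3 - outflow_m3)

# Hypothetical 100-year ARI burst on a 0.1 ha lot, for a low and a higher PSD
for psd in (5.0, 15.0):
    ssr = site_storage_requirement(intensity_mm_h=120, duration_min=30,
                                   runoff_coeff=0.4, area_ha=0.1, psd_l_s=psd)
    print(f"PSD = {psd:4.1f} L/s  ->  SSR ~ {ssr:.1f} m^3")
```

Even this crude calculation reproduces the pattern noted above: lowering the permissible discharge drives the required on-site storage up.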

Keywords: on-site stormwater detention, source control, stormwater, urban drainage

Procedia PDF Downloads 158
315 A Case Report: The Role of Gut Directed Hypnotherapy in Resolution of Irritable Bowel Syndrome in a Medication Refractory Pediatric Male Patient

Authors: Alok Bapatla, Pamela Lutting, Mariastella Serrano

Abstract:

Background: Irritable Bowel Syndrome (IBS) is a functional gastrointestinal disorder characterized by abdominal pain associated with altered bowel habits in the absence of an underlying organic cause. Although the exact etiology of IBS is not fully understood, one of the leading theories postulates a pathology within the brain-gut axis that leads to an overall increase in gastrointestinal sensitivity and pejorative changes in gastrointestinal motility. Research and clinical practice have shown that Gut Directed Hypnotherapy (GDH) has a beneficial clinical role in improving mind-gut control and thereby comorbid conditions such as anxiety, abdominal pain, constipation, and diarrhea. Aims: This study presents a 17-year-old male with underlying anxiety and a one-year history of IBS-Constipation Predominant Subtype (IBS-C) who demonstrated impressive improvement of symptoms following GDH treatment after refractory trials of medications including bisacodyl, senna, docusate, magnesium citrate, lubiprostone, and linaclotide. Method: The patient was referred to a licensed clinical psychologist specializing in clinical hypnosis and cognitive-behavioral therapy (CBT), who implemented “The Standardized Hypnosis Protocol for IBS” developed by Dr. Olafur S. Palsson, Psy.D., at the University of North Carolina at Chapel Hill. The hypnotherapy protocol consisted of a total of seven weekly 45-minute sessions supplemented with a 20-minute audio recording to be listened to once daily. Outcome variables included the GAD-7, PHQ-9, and DCI-2, as well as self-ratings (ranging 0-10) for pain (intensity and frequency), emotional distress about IBS symptoms, and overall emotional distress. All variables were measured at intake prior to administration of the hypnosis protocol and at the conclusion of the hypnosis treatment. A retrospective IBS Questionnaire (IBS Severity Scoring System) was also completed at the conclusion of the GDH treatment for pre- and post-test ratings of clinical symptoms. Results: The patient showed improvement in all outcome variables and self-ratings, including abdominal pain intensity, frequency of abdominal pain episodes, emotional distress relating to gut issues, depression, and anxiety. The IBS Questionnaire showed a significant improvement, from a severity score of 400 (defined as severe) prior to the GDH intervention to 55 (defined as complete resolution) four months after the last session. IBS Questionnaire subset questions that showed a significant score improvement included abdominal pain intensity, days of pain experienced per 10 days, satisfaction with bowel habits, and overall interference with life caused by IBS symptoms. Conclusion: This case supports the existing research literature showing that GDH has a significantly beneficial role in improving symptoms in patients with IBS. Emphasis is placed on the numerical results of the IBS Questionnaire scoring, which reflect a patient who initially suffered from severe IBS with failed response to multiple medications and who subsequently showed full and sustained resolution.
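For readers unfamiliar with the questionnaire, the sketch below assembles an IBS Severity Scoring System total from its five 0-100 items and classifies it into the commonly cited severity bands (500 maximum, with scores below 75 generally read as remission). The pre- and post-treatment item scores are hypothetical illustrations consistent with the reported totals of 400 and 55, not the patient's actual responses.

```python
# IBS-SSS total and severity band from five items, each contributing 0-100.
# Item values below are hypothetical, chosen to match the reported totals.
def ibs_sss(pain_severity, pain_days_out_of_10, distension, dissatisfaction, interference):
    total = (pain_severity + pain_days_out_of_10 * 10 +
             distension + dissatisfaction + interference)
    if total < 75:
        band = "remission / resolution"
    elif total < 175:
        band = "mild"
    elif total < 300:
        band = "moderate"
    else:
        band = "severe"
    return total, band

pre  = ibs_sss(pain_severity=80, pain_days_out_of_10=8, distension=70,
               dissatisfaction=85, interference=85)
post = ibs_sss(pain_severity=10, pain_days_out_of_10=1, distension=10,
               dissatisfaction=15, interference=10)
print("pre-treatment :", pre)   # (400, 'severe')
print("post-treatment:", post)  # (55, 'remission / resolution')
```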

Keywords: pediatrics, constipation, irritable bowel syndrome, hypnotherapy, gut-directed hypnosis

Procedia PDF Downloads 169
314 CO2e Sequestration via High Yield Crops and Methane Capture for ZEV Sustainable Aviation Fuel

Authors: Bill Wason

Abstract:

143 Crude Palm Oil Coop mills on Sumatra Island are participating in a program to transfer land from defaulted estates to small farmers while improving the sustainability of palm production to allow for biofuel and food production. GCarbon will be working with farmers to transfer technology, fertilizer, and trees to double the yield from the current baseline of 3.5 tons to at least 7 tons of oil per ha (25 tons of fruit bunches). This will be measured via evaluation of yield comparisons between participant and non-participant farms. We will also capture methane from Palm Oil Mill Effluent (POME) through belt press filtering. Residues will be weighed and a formula used to estimate methane emission reductions based on methodologies developed by other researchers. GCarbon will also cover mill ponds with a non-permeable membrane and collect methane for energy or steam production. A system for accelerating methane production involving ozone and electro-flocculation will be tested to intensify methane generation and reduce the time for wastewater treatment. A meta-analysis of research on sweet potatoes and sorghum as rotation crops will look at work in Rio Grande do Sul, Brazil, where 5 ha of test plots of industrial sweet potato have achieved yields of 60 tons and 40 tons per ha from 2 harvests in one year (100 MT/ha/year). Field trials will be duplicated in Bom Jesus das Selvas, Maranhão, that will test varieties of sweet potatoes to measure yields and evaluate disease risks in the very different soil and climate of NE Brazil. Hog methane will also be captured. GCarbon Brazil, Coop Sisal, and an Australian research partner will plant several varieties of agave and use agronomic procedures to get yields of 880 MT per ha over 5 years. They will also plant new varieties expected to yield 3,500 MT of biomass after 5 years (176-700 MT per ha per year). The goal is to show that the agave can adapt to Brazil’s climate without disease problems. The study will include a field visit to growing sites in Australia where agave is being grown commercially for biofuels production. Researchers will measure the biomass per hectare at various stages in the growing cycle, sugar content at harvest, and other metrics to confirm that the yield of sugar per ha is up to 10 times greater than sugar cane. The study will look at sequestration rates by measuring soil carbon and root accumulation in various plots in Australia to confirm carbon sequestered from 5 years of production. The agave developer estimates that 60-80 MT of sequestration per ha per year occurs from agave. The three study efforts in 3 different countries will define a feedstock pathway for jet fuel that involves very high yield crops that can produce 2 to 10 times more biomass than current assumptions. This cost-effective and less land-intensive strategy will meet global jet fuel demand and produce huge quantities of food for net zero aviation and feeding 9-10 billion people by 2050.

Keywords: zero emission SAF, methane capture, food-fuel integrated refining, new crops for SAF

Procedia PDF Downloads 74
313 Optimized Processing of Neural Sensory Information with Unwanted Artifacts

Authors: John Lachapelle

Abstract:

Introduction: Neural stimulation is increasingly targeted toward treatment of back pain, PTSD, Parkinson’s disease, and sensory perception. Sensory recording during stimulation is important in order to examine the neural response to stimulation. Most neural amplifiers (headstages) focus on noise efficiency factor (NEF). Conversely, neural headstages need to handle artifacts from several sources, including power lines, movement (EMG), and neural stimulation itself. In this work, a layered approach to artifact rejection is used to reduce corruption of the neural ENG signal by 60 dBV, resulting in recovery of sensory signals in rats and primates that would previously not be possible. Methods: The approach combines analog techniques to reduce and handle unwanted signal amplitudes. The methods include optimized (1) sensory electrode placement, (2) amplifier configuration, and (3) artifact blanking when necessary. The techniques together are like concentric moats protecting a castle; only the wanted neural signal can penetrate. There are two conditions in which the headstage operates: unwanted artifact < 50 mV, linear operation, and artifact > 50 mV, fast-settle gain reduction signal limiting (covered in more detail in a separate paper). Unwanted signals at the headstage input: Consider that (a) EMG signals are by nature < 10 mV. (b) 60 Hz power line signals may be > 50 mV with poor electrode cable conditions; with careful routing, much of the signal is common to both reference and active electrodes and is rejected in the differential amplifier, with < 50 mV remaining. (c) An unwanted (to the neural recorder) stimulation signal is attenuated from the stimulation to the sensory electrode. The voltage seen at the sensory electrode can be modeled as Φ_m = I_o / (4πσr). For a 1 mA stimulation signal, with 1 cm spacing between electrodes, the signal is < 20 mV at the headstage. Headstage ASIC design: The front-end ASIC is designed to produce < 1% THD at 50 mV input, 50 times higher than typical headstage ASICs, with no increase in noise floor. This requires careful balancing of the amplifier stages in the headstage ASIC, as well as consideration of the electrodes' effect on noise. The ASIC is designed to allow extremely small signal extraction on low-impedance (< 10 kΩ) electrodes with configuration of the headstage ASIC noise floor to < 700 nV/√Hz. Smaller high-impedance electrodes (> 100 kΩ) are typically located closer to neural sources and transduce higher-amplitude signals (> 10 µV); the ASIC low-power mode conserves power with 2 µV/√Hz noise. Findings: The enhanced neural processing ASIC has been compared with a commercial neural recording amplifier IC. Chronically implanted primates at MGH demonstrated the presence of commercial neural amplifier saturation as a result of large environmental artifacts. The enhanced artifact suppression headstage ASIC, in the same setup, was able to recover and process the wanted neural signal separately from the suppressed unwanted artifacts. Separately, the enhanced artifact suppression headstage ASIC was able to separate sensory neural signals from unwanted artifacts in mouse-implanted peripheral intrafascicular electrodes. Conclusion: Optimized headstage ASICs allow observation of neural signals in the presence of large artifacts that will be present in real-life implanted applications and are targeted toward human implantation in the DARPA HAPTIX program.
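A quick numerical check of the point-source estimate quoted above, Φ_m = I_o / (4πσr), is given below. The abstract states only the stimulation current (1 mA) and electrode spacing (1 cm); the tissue conductivity used here (~0.5 S/m) is an assumed value.

```python
# Potential at distance r from a monopolar current source in a uniform conductive medium.
# The conductivity value is an assumption; current and spacing come from the abstract.
import math

def stim_artifact_voltage(i_amp, sigma_s_per_m, r_m):
    """Phi_m = I_o / (4*pi*sigma*r), in volts."""
    return i_amp / (4 * math.pi * sigma_s_per_m * r_m)

v = stim_artifact_voltage(i_amp=1e-3, sigma_s_per_m=0.5, r_m=0.01)
print(f"estimated artifact at the sensory electrode: {v * 1e3:.1f} mV")  # ~15.9 mV, i.e. < 20 mV
```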

Keywords: ASIC, biosensors, biomedical signal processing, biomedical sensors

Procedia PDF Downloads 299
312 A Computational Framework for Load Mediated Patellar Ligaments Damage at the Tropocollagen Level

Authors: Fadi Al Khatib, Raouf Mbarki, Malek Adouni

Abstract:

In various sport and recreational activities, the patellofemoral joint undergoes large forces and moments while accommodating the significant movement of the knee joint. In doing so, this joint is commonly the source of anterior knee pain related to instability in normal patellar tracking and excessive pressure syndrome. One well-observed explanation of the instability of normal patellar tracking is damage to the patellofemoral ligaments and patellar tendon. Improved knowledge of the damage mechanisms mediating ligament and tendon injuries can be of great help not only in rehabilitation and prevention procedures but also in the design of better reconstruction systems in the management of knee joint disorders. This damage mechanism, specifically due to excessive mechanical loading, has been linked to the micro level of the fibred structure, precisely to the tropocollagen molecules and their connection density. We argue that defining a clear framework from the bottom (micro level) up to the top (macro level) of the soft tissue hierarchy may elucidate the essential underpinnings of the state of ligament damage. To do so, in this study a multiscale fibril-reinforced hyperelastoplastic finite element model that accounts for the synergy between molecular and continuum syntheses was developed to determine the short-term stress/strain response of the patellofemoral ligaments and tendon. The plasticity of the proposed model is associated only with the uniaxial deformation of the collagen fibril. The yield strength of the fibril is a function of the cross-link density between tropocollagen molecules, defined here by a density function. This function was obtained through a coarse-graining procedure linking nanoscale collagen features and tissue-level material properties using molecular dynamics simulations. The hierarchies of the soft tissues were implemented using the rule of mixtures. Thereafter, the model was calibrated using a statistical calibration procedure. The model was then implemented into a real structure of the patellofemoral ligaments and patellar tendon (OpenKnee) and simulated under realistic loading conditions. With the calibrated material parameters, the calculated axial stress agrees well with the experimental measurements, with a coefficient of determination (R²) equal to 0.91 and 0.92 for the patellofemoral ligaments and the patellar tendon, respectively. The ‘best’ prediction of the yield strength and strain compared with the reported experimental data was obtained when the cross-link density between the tropocollagen molecules of the fibril was equal to 5.5 ± 0.5 (patellofemoral ligaments) and 12 (patellar tendon). Damage initiation in the patellofemoral ligaments was located at the femoral insertions, while damage in the patellar tendon occurred in the middle of the structure. These predicted findings showed a meaningful correlation between the cross-link density of the tropocollagen molecules and the stiffness of the connective tissues of the extensor mechanism. Damage initiation and propagation were also documented with this model and were in satisfactory agreement with earlier observations. To the best of our knowledge, this is the first attempt to model these ligaments from the bottom up, with predictions depending on the tropocollagen cross-link density. This approach appears more meaningful for a realistic simulation of a damage process or repair attempt compared with certain published studies.
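The sketch below is not the authors' finite element implementation; it only illustrates, with assumed constants and an assumed linear dependence, the two ingredients named above: a fibril yield stress that grows with tropocollagen cross-link density, and a rule-of-mixtures blend of fibril and matrix stresses at the tissue level.

```python
# Illustrative fibril yield law and rule-of-mixtures tissue stress. All constants and
# the linear cross-link dependence are assumptions for illustration only.
import numpy as np

def fibril_stress(strain, crosslink_density, e_fibril=500.0, k_yield=8.0):
    """Elastic-perfectly-plastic fibril stress (MPa); yield stress scales with cross-link density."""
    sigma_yield = k_yield * crosslink_density          # assumed linear dependence
    return np.minimum(e_fibril * strain, sigma_yield)  # plasticity only along the fibril axis

def tissue_stress(strain, crosslink_density, fibril_fraction=0.6, e_matrix=10.0):
    """Rule of mixtures: volume-weighted sum of fibril and non-fibrillar matrix stresses."""
    return (fibril_fraction * fibril_stress(strain, crosslink_density)
            + (1 - fibril_fraction) * e_matrix * strain)

strains = np.linspace(0.0, 0.12, 7)
for rho in (5.5, 12.0):   # cross-link densities reported for the ligament and tendon fits
    print(f"cross-link density {rho}:", np.round(tissue_stress(strains, rho), 1))
```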

Keywords: tropocollagen, multiscale model, fibrils, knee ligaments

Procedia PDF Downloads 103
311 Geochemistry and Tectonic Framework of Malani Igneous Suite and Their Effect on Groundwater Quality of Tosham, India

Authors: Naresh Kumar, Savita Kumari, Naresh Kochhar

Abstract:

The objective of the study was to assess the role of mineralogy and subsurface structure in the water quality of Tosham, Malani Igneous Suite (MIS), Western Rajasthan, India. The MIS is the largest (55,000 km²) A-type, anorogenic, high-heat-producing acid magmatic province in peninsular India and owes its origin to hot spot tectonics. Apart from agricultural and industrial wastes, geogenic activities cause fluctuations in the quality parameters of water resources. Twenty (20) water samples selected from Tosham and surrounding areas were analyzed for As, Pb, B, Al, Zn, Fe, and Ni using inductively coupled plasma emission spectrometry and for F by ion chromatography. The concentrations of As, Pb, B, Ni, and F were above the stipulated levels specified by the BIS (Bureau of Indian Standards, IS-10500, 2012). The concentrations of As and Pb in the areas surrounding Tosham ranged from 1.2 to 4.1 mg/l and from 0.59 to 0.9 mg/l, respectively, which is higher than the limits of 0.05 mg/l (As) and 0.01 mg/l (Pb). Excess trace metal accumulation in water is toxic to humans, adversely affects the central nervous system, kidneys, gastrointestinal tract, and skin, and causes mental confusion. Groundwater quality is defined by the nature of rock formations, mineral-water reactions, physiography, soils, environment, and the recharge and discharge conditions of the area. Fluoride content in groundwater is due to the solubility of fluoride-bearing minerals like fluorite, cryolite, topaz, and mica. Tosham comprises quartz mica schist, quartzite, schorl, tuff, quartz porphyry, and associated granites; thus, fluoride is leached out and dissolved in groundwater. In the study area, Ni concentrations ranged from 0.07 to 0.5 mg/l (permissible limit 0.02 mg/l). The primary source of nickel in drinking water is nickel leached from ore-bearing rocks. Higher concentrations of As are found in some igneous rocks, specifically those containing minerals such as arsenopyrite (FeAsS), realgar (AsS), and orpiment (As2S3). The MIS consists of granite (hypersolvus and subsolvus), rhyolite, dacite, trachyte, andesite, pyroclasts, basalt, gabbro, and dolerite, which increase the trace element concentrations in groundwater. Nakora, a part of the MIS, has high concentrations of trace and rare earth elements (Ni, Rb, Pb, Sr, Y, Zr, Th, U, La, Ce, Nd, Eu, and Yb), which allow Ni and Pb to percolate into groundwater through weathering, contacts, and joints/fractures in the rocks. Additionally, the geological setting of the MIS also causes dissolution of trace elements in water resources beneath the surface. The NE–SW tectonic lineament, the radial pattern of dykes, and the volcanic vent at Nakora created pathways for leaching of these elements into groundwater. Rainwater quality might be altered by the major mineral constituents of the host Tosham rocks during its percolation through rock fractures and joints before becoming an integral part of the groundwater aquifer. Weathering processes like hydration, hydrolysis, and solution might be the cause of changes in the water chemistry of a particular area. These studies suggest that the geological relation of the soil-water horizon with MIS rocks, via mineralogical variations, structures, and tectonic setting, affects the water quality of the studied area.
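As a simple illustration of the comparison made above, the snippet below tabulates the quoted measured ranges against the BIS IS-10500 (2012) limits; only the elements and values stated in the abstract are used, and the record structure is an assumption for illustration.

```python
# Flag which measured concentration ranges exceed the quoted BIS limits.
bis_limit_mg_l = {"As": 0.05, "Pb": 0.01, "Ni": 0.02}
measured_range_mg_l = {"As": (1.2, 4.1), "Pb": (0.59, 0.9), "Ni": (0.07, 0.5)}

for element, (low, high) in measured_range_mg_l.items():
    limit = bis_limit_mg_l[element]
    print(f"{element}: {low}-{high} mg/l vs limit {limit} mg/l "
          f"-> exceeds the limit by up to {high / limit:.0f}x")
```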

Keywords: geochemistry, groundwater, Malani Igneous Suite, Tosham

Procedia PDF Downloads 187
310 The Return of the Rejected Kings: A Comparative Study of Governance and Procedures of Standards Development Organizations under the Theory of Private Ordering

Authors: Olia Kanevskaia

Abstract:

Standardization has been in the limelight of numerous academic studies. Typically described as ‘any set of technical specifications that either provides or is intended to provide a common design for a product or process’, standards not only set quality benchmarks for products and services but also spur competition and innovation, resulting in advantages for manufacturers and consumers. Their contribution to globalization and technological advancement is especially crucial in the Information and Communication Technology (ICT) and telecommunications sector, which is also characterized by weaker state regulation and expert-based rule-making. Most of the standards developed in that area are interoperability standards, which allow technological devices to establish ‘invisible communications’ and ensure their compatibility and proper functioning. This type of standard supports a large share of our daily activities, ranging from traffic coordination by traffic lights to connection to Wi-Fi networks, transmission of data via Bluetooth or USB, and building the network architecture for the Internet of Things (IoT). A large share of ICT standards is developed in specialized voluntary platforms, commonly referred to as Standards Development Organizations (SDOs), which gather experts from various industry sectors, private enterprises, governmental agencies, and academia. The institutional architecture of these bodies can vary from semi-public bodies, such as the European Telecommunications Standards Institute (ETSI), to industry-driven consortia, such as the Internet Engineering Task Force (IETF). The past decades witnessed a significant shift of standard setting to those institutions: while operating independently from state regulation, they offer a rather informal setting, which enables fast-paced standardization and places technical supremacy and flexibility of standards above other considerations. Although technical norms and specifications developed by such nongovernmental platforms are not binding, they appear to create significant regulatory impact. In the United States (US), private voluntary standards can be used by regulators to achieve their policy objectives; in the European Union (EU), compliance with harmonized standards developed by voluntary European Standards Organizations (ESOs) can grant a product a free-movement pass. Moreover, standards can de facto manage the functioning of the market when other regulatory alternatives are not available. Hence, by establishing (potentially) mandatory norms, SDOs assume regulatory functions commonly exercised by states and shape their own legal order. The purpose of this paper is threefold. First, it attempts to shed some light on SDOs’ institutional architecture, focusing on private, industry-driven platforms and comparing their regulatory frameworks with those of formal organizations. Drawing upon the relevant scholarship, the paper then discusses the extent to which the formulation of technological standards within SDOs constitutes a private legal order operating in the shadow of governmental regulation. Ultimately, this contribution seeks to advise whether state intervention in industry-driven standard setting is desirable, and whether the increasing regulatory importance of SDOs should be addressed in legislation on standardization.

Keywords: private order, standardization, standard-setting organizations, transnational law

Procedia PDF Downloads 129
309 Comparing Community Health Agents, Physicians and Nurses in Brazil's Family Health Strategy

Authors: Rahbel Rahman, Rogério Meireles Pinto, Margareth Santos Zanchetta

Abstract:

Background: Existing shortcomings of current health-service delivery include poor teamwork, competencies that do not address consumer needs, and episodic rather than continuous care. Brazil’s Sistema Único de Saúde (Unified Health System, UHS) is acknowledged worldwide as a model for delivering community-based care through Estratégia Saúde da Família (FHS; Family Health Strategy) interdisciplinary teams, comprised of Community Health Agents (in Portuguese, Agentes Comunitários de Saúde, ACS), nurses, and physicians. FHS teams are mandated to collectively offer clinical care, disease prevention services, vector control, health surveillance, and social services. Our study compares medical providers (nurses and physicians) and community-based providers (ACS) on their perceptions of work environment, professional skills, cognitive capacities, and job context. Global health administrators and policymakers can draw on similarities and differences across care providers to develop interprofessional training for community-based primary care. Methods: Cross-sectional data were collected from 168 ACS, 62 nurses, and 32 physicians in Brazil. We compared providers’ demographic characteristics (age, race, and gender) and job context variables (caseload, work experience, work proximity to the community, length of commute, and familiarity with the community). Providers’ perceptions were compared with regard to their work environment (work conditions and work resources), professional skills (consumer input, interdisciplinary collaboration, efficacy of FHS teams, work methods, and decision-making autonomy), and cognitive capacities (knowledge and skills, skill variety, confidence, and perseverance). Descriptive and bivariate analyses, such as Pearson chi-square and Analysis of Variance (ANOVA) F-tests, were performed to draw comparisons across providers. Results: The majority of participants were ACS (64%), followed by nurses (24%) and physicians (12%). The majority of nurses and ACS identified as mixed race (ACS, n=85; nurses, n=27); most physicians identified as male (n=16; 52%) and white (n=18; 58%). Physicians were less likely to incorporate consumer input and demonstrated greater decision-making autonomy than nurses and ACS. ACS reported the highest levels of knowledge and skills but the least confidence compared to nurses and physicians. ACS, nurses, and physicians believed that FHS teams improved the quality of health in their catchment areas, though nurses tended to disagree that interdisciplinary collaboration facilitated their work. Conclusion: To our knowledge, there has been no study comparing key demographic and cognitive variables across ACS, nurses, and physicians in the context of their work environment and professional training. We suggest that global health systems can leverage the diverse perspectives of providers to implement a community-based primary care model grounded in interprofessional training. Our study underscores the need for in-service trainings to instill reflective skills in providers and to improve the communication skills of medical providers and the curative skills of ACS. Greater autonomy needs to be extended to community-based providers to offer care integral to addressing consumer and community needs.
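A minimal sketch of the bivariate comparisons named above (a Pearson chi-square test on a categorical variable and a one-way ANOVA F-test on a continuous perception score across the three provider groups) is given below; the counts and scores are hypothetical placeholders, not the study's data.

```python
# Chi-square test on a categorical variable and one-way ANOVA on a perception score
# across ACS, nurses, and physicians. All values are hypothetical placeholders.
import numpy as np
from scipy import stats

# Hypothetical gender counts (male, female) for ACS (n=168), nurses (n=62), physicians (n=32)
contingency = np.array([[30, 138],
                        [ 8,  54],
                        [16,  16]])
chi2, p_chi, dof, _ = stats.chi2_contingency(contingency)
print(f"chi-square: chi2 = {chi2:.1f}, dof = {dof}, p = {p_chi:.4f}")

# Hypothetical decision-making autonomy scores per provider group
rng = np.random.default_rng(1)
acs, nurses, physicians = (rng.normal(m, 0.6, n) for m, n in ((3.1, 168), (3.4, 62), (3.9, 32)))
f_stat, p_anova = stats.f_oneway(acs, nurses, physicians)
print(f"ANOVA: F = {f_stat:.1f}, p = {p_anova:.4f}")
```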

Keywords: global health systems, interdisciplinary health teams, community health agents, community-based care

Procedia PDF Downloads 209
308 Tertiary Training of Future Health Educators and Health Professionals Involved in Childhood Obesity Prevention and Treatment Strategies

Authors: Thea Werkhoven, Wayne Cotton

Abstract:

Adult and childhood rates of obesity in Australia are health concerns of high national priority, retaining epidemic status in the populations affected. Attempts to prevent further increases in the prevalence of childhood obesity in the population aged below eighteen years have had varied success. A multidisciplinary approach has been used, employing strategies in schools, through established health care system usage, and via public health campaigns. Over the last decade, a plateau in prevalence has been reached in the youth population afflicted by obesity, and interest has peaked in school-based strategies to prevent and treat overweight and obesity. Of interest to this study is the importance of the tertiary training of future health educators and health professionals destined to be involved in obesity prevention and treatment strategies. Health educators and health professionals are considered instrumental to the success of prevention and treatment strategies and are required to possess sufficient and accurate knowledge in order to be effective in their positions. A common influence on the success of school-based health-promoting activities is the weight-based attitudes possessed by health educators, known to be negative and biased towards overweight or obese children during training and practice. While the tertiary training of future health professionals includes minimal nutrition education, there is no mandatory training in health education or nutrition for pre-service health educators in Australian tertiary institutions. This study aimed to assess the impact of a pedagogical intervention on pre-service health educators and health professionals enrolled in a health and wellbeing elective. The intervention aimed to increase nutrition knowledge and decrease weight bias and was embedded in the twelve-week elective. Participants (n=98) were tertiary students at a major Australian university who were enrolled in health (47%) and non-health related degrees (53%). A quantitative survey using four valid and reliable instruments was conducted to measure nutrition knowledge, antifat attitudes, and weight stereotyping attitudes at baseline and post-intervention. Scores on each instrument were compared between time points to check whether they had significantly changed and to determine the effect of the intervention on attitudes and knowledge. Antifat attitudes at baseline were considered low and decreased further over the course of the intervention. Scores representing weight bias did decrease, but the change was not significant. Fat stereotyping attitudes became stronger over the course of the intervention, and this change was significant. Nutrition knowledge significantly improved from baseline to post-intervention. The design of the nutrition knowledge and attitude amelioration content of the intervention was therefore semi-successful in achieving its outcomes. While the level of nutrition knowledge improved over the course of the intervention, an unintentional increase was observed in weight-based prejudice, which is known to occur in interventions that employ stigma reduction methodologies. Further research is required into a structured methodology that increases the level of nutrition knowledge and ameliorates weight bias at the tertiary level. In this way, the training provided would help prepare future health educators with the knowledge, skills, and attitudes required to be effective and bias-free in their practice.
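As an illustration of the baseline versus post-intervention comparison described above, the sketch below runs a paired t-test on scores from a single instrument; the scores are simulated placeholders for the n = 98 participants, not the study's data.

```python
# Paired t-test comparing baseline and post-intervention scores on one instrument.
# Scores are simulated placeholders for illustration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
baseline = rng.normal(14.0, 3.0, 98)          # hypothetical nutrition knowledge scores
post = baseline + rng.normal(2.0, 2.5, 98)    # hypothetical change after the 12-week elective

t_stat, p_value = stats.ttest_rel(post, baseline)
print(f"paired t-test: t = {t_stat:.2f}, p = {p_value:.4g}, "
      f"mean change = {np.mean(post - baseline):.2f}")
```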

Keywords: education, intervention, nutrition, obesity

Procedia PDF Downloads 173
307 Production Factor Coefficients Transition through the Lens of State Space Model

Authors: Kanokwan Chancharoenchai

Abstract:

Economic growth can be considered an important element of a country’s development process. For developing countries like Thailand, to ensure continuous growth of the economy, the Thai government usually implements various policies to stimulate economic growth. These may take the form of fiscal, monetary, trade, and other policies. Because of these different aspects, understanding the factors relating to economic growth could allow the government to introduce a proper plan for future economic stimulus schemes. Consequently, this issue has caught the interest not only of policymakers but also of academics. This study, therefore, investigates explanatory variables for economic growth in Thailand from 2005 to 2017, covering a total of 52 quarters. The findings contribute to the field of economic growth and provide helpful information to policymakers. The investigation estimates a production function based on a non-linear Cobb-Douglas equation. The rate of growth is indicated by the change of GDP in natural logarithmic form. The relevant factors included in the estimation cover three traditional means of production and implicit effects, such as human capital, international activity, and technological transfer from developed countries. In addition, this investigation takes internal and external instabilities into account, proxied by an unobserved inflation estimate and the real effective exchange rate (REER) of the Thai baht, respectively. The unobserved inflation series is obtained from an AR(1)-ARCH(1) model, while the unobserved REER of the Thai baht is obtained from a naive OLS-GARCH(1,1) model. According to the empirical results, the AR(|2|) equation, which includes seven significant variables, namely capital stock, labor, imports of capital goods, trade openness, REER uncertainty of the Thai baht, one-period-lagged GDP, and a 2009 world financial crisis dummy, presents the most suitable model. The autoregressive model assumes constant coefficients, an assumption that may introduce bias. This is not the case for the recursive coefficient model derived from the state space framework, which allows the coefficients to transition over time. The state space model thus provides the productivity or effect of each significant factor in more detail. The state coefficients are estimated based on the AR(|2|) specification, with the exception of the one-period-lagged GDP and the 2009 world financial crisis dummy. The findings shed light on the observation that those factors seem to have been stable over time since the occurrence of the world financial crisis together with the political situation in Thailand; these two events could lower confidence in the Thai economy. Moreover, the state coefficients highlight the sluggish rate of machinery replacement and the rather low technology embodied in capital goods imported from abroad. The Thai government should apply proactive policies via taxation and specific credit policies to improve technological advancement, for instance. Another interesting piece of evidence is the issue of trade openness, which shows a negative transition effect over the sample period. This could be explained by the loss of price competitiveness to imported goods, especially under the widespread implementation of free trade agreements. The Thai government should carefully handle regulations and investment incentive policy by focusing on strengthening small and medium enterprises.
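A minimal sketch of the "transition of coefficients" idea is given below: a regression whose coefficients follow random walks, filtered with a basic Kalman recursion, which is the mechanism underlying recursive-coefficient state space estimation. The simulated data, the three illustrative regressors, and the noise variances are assumptions; the study itself uses Thai quarterly data over 52 quarters and an AR(|2|) specification with seven regressors.

```python
# Kalman filter for y_t = x_t' beta_t + e_t, with beta_t = beta_{t-1} + w_t (random walks).
# Data and variances are simulated/assumed for illustration only.
import numpy as np

rng = np.random.default_rng(42)
T, k = 52, 3                                   # 52 quarters, 3 regressors for illustration
X = np.column_stack([np.ones(T),               # intercept
                     rng.normal(size=T),       # e.g. capital stock growth
                     rng.normal(size=T)])      # e.g. trade openness
true_beta = np.cumsum(rng.normal(0, 0.05, size=(T, k)), axis=0) + [0.5, 0.3, 0.2]
y = np.sum(X * true_beta, axis=1) + rng.normal(0, 0.1, size=T)

beta = np.zeros(k)            # state estimate (the time-varying coefficients)
P = np.eye(k) * 10.0          # state covariance (diffuse-ish prior)
Q = np.eye(k) * 0.05 ** 2     # coefficient innovation variance (assumed)
R = 0.1 ** 2                  # observation noise variance (assumed)
filtered = np.empty((T, k))
for t in range(T):
    P = P + Q                                  # predict: random-walk transition
    x = X[t]
    S = x @ P @ x + R                          # innovation variance
    K = P @ x / S                              # Kalman gain
    beta = beta + K * (y[t] - x @ beta)        # update state with the new observation
    P = P - np.outer(K, x) @ P
    filtered[t] = beta

print("last-quarter coefficient estimates:", np.round(filtered[-1], 2))
```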

Keywords: autoregressive model, economic growth, state space model, Thailand

Procedia PDF Downloads 125
306 Environmental Impacts Assessment of Power Generation via Biomass Gasification Systems: Life Cycle Analysis (LCA) Approach for Tars Release

Authors: Grâce Chidikofan, François Pinta, A. Benoist, G. Volle, J. Valette

Abstract:

Statement of the Problem: Biomass gasification systems may be relevant for decentralized power generation from recoverable agricultural and wood residues available in rural areas. In recent years, many systems have been implemented all over the world, especially in Cambodia and India. Although they have many positive effects, these systems can also affect the environment and human health. Indeed, during the process of biomass gasification, black wastewater containing tars is produced and generally discharged into the local environment, either into rivers or onto soil. However, in most environmental assessment studies of biomass gasification systems, the impact of these releases is underestimated, due to the difficulty of identifying their chemical substances. This work deals with the analysis of the environmental impacts of tars from wood gasification in terms of human toxicity cancer effect, human toxicity non-cancer effect, and freshwater ecotoxicity. Methodology: A Life Cycle Assessment (LCA) approach was adopted. The inventory of tar chemical substances was based on experimental data from a downdraft gasification system. The composition of six samples from two batches of raw material was analyzed: one batch made of three wood species (oak + plane tree + pine) at 25% moisture content and a second batch made of oak at 11% moisture content. The tests were carried out at different gasifier load rates, in the ranges 50-75% and 50-100%, respectively. To choose the environmental impact assessment method, we compared the methods available in the SimaPro tool (8.2.0) that take into account most of the chemical substances. The environmental impacts for 1 kg of tars discharged were characterized by the ILCD 2011+ method (V.1.08). Findings: Experimental results revealed 38 important chemical substances in varying proportions from one test to another. Only 30 are characterized by the ILCD 2011+ method, which is one of the best-performing methods. The results show that wood species and moisture content have no significant impact on the human toxicity non-cancer effect (HTNCE) and freshwater ecotoxicity (FWE) for release into water. For the human toxicity cancer effect (HTCE), a small gap is observed between the impact factors of the two batches, namely 3.08E-7 CTUh/kg against 6.58E-7 CTUh/kg. On the other hand, it was found that the risk of negative effects is higher in the case of tar release into water than onto soil for all impact categories. Indeed, considering the set of samples, the average impact factor obtained for HTNCE varies from 1.64E-7 CTUh/kg for release into water to 1.60E-8 CTUh/kg for release onto soil. For HTCE, the impact factor varies between 4.83E-7 CTUh/kg and 2.43E-8 CTUh/kg. The variability of these impact factors is relatively low for these two impact categories. Concerning FWE, the variability of the impact factor is very high: 1.3E+03 CTUe/kg for tar release into water against 2.01E+01 CTUe/kg for tar release onto soil. Concluding statement: The results of this study show that the environmental impacts of tar emissions from biomass gasification systems can be substantial, and it is important to investigate ways to reduce them. For environmental research, these results represent an important step in a global environmental assessment of the studied systems. They could be used to better manage tar-containing wastewater so as to reduce, as far as possible, the impacts of the numerous systems still running all over the world.
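A small sketch of the characterization step used above: the impact of a tar release is the released mass multiplied by the impact factor of the receiving compartment. The factors below are the averages quoted in the abstract; the annual release mass is a hypothetical value.

```python
# Multiply a released tar mass by per-kg impact factors for each receiving compartment.
# Impact factors are the abstract's quoted averages; the release mass is hypothetical.
impact_factors = {
    # human toxicity cancer (HTCE), human toxicity non-cancer (HTNCE), freshwater ecotoxicity (FWE)
    "water": {"HTCE_CTUh": 4.83e-7, "HTNCE_CTUh": 1.64e-7, "FWE_CTUe": 1.3e3},
    "soil":  {"HTCE_CTUh": 2.43e-8, "HTNCE_CTUh": 1.60e-8, "FWE_CTUe": 2.01e1},
}

tars_released_kg = 100.0   # hypothetical annual tar release from one gasifier
for compartment, factors in impact_factors.items():
    impacts = {name: tars_released_kg * f for name, f in factors.items()}
    print(compartment, {name: f"{value:.3g}" for name, value in impacts.items()})
```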

Keywords: biomass gasification, life cycle analysis, LCA, environmental impact, tars

Procedia PDF Downloads 252