Search results for: quantitative microbial risk assessment tools
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 17302


1012 An Integrative Review on the Experiences of Integration of Quality Assurance Systems in Universities

Authors: Laura Mion

Abstract:

Concepts of quality assurance and management are now part of the organizational culture of universities. Quality Assurance (QA) systems are, in large part, provided for by national regulatory requirements or supranational guidance (such as, at the European level, the ESG, the "Standards and Guidelines for Quality Assurance in the European Higher Education Area"), but their specific definition, in terms of guiding principles, requirements, and methodologies, is often delegated to national evaluation agencies or to the autonomy of individual universities. For this reason, the experiences of implementing QA systems in different countries and in different universities are an interesting source of information for understanding how quality in universities is understood, pursued, and verified. The literature often deals with experiences of implementing QA systems in the individual areas in which a university's activity is carried out - teaching, research, third mission - but only rarely considers quality systems with a systemic and integrated approach, which makes it possible to correlate subjects, actions, and performance in a virtuous circuit of continuous improvement. In particular, it is interesting to understand how to relate the results and uses of QA across the threefold distinction of university activities, identifying how one can drive the performance of another as part of an integrated whole rather than as the output of specific activities or processes conceived in an abstractly atomistic way. The aim of the research is, therefore, to investigate which experiences of "integrated" QA systems are present on the international scene: starting from the experience of European countries that have long shared the Bologna Process for the creation of a European Higher Education Area (EHEA), but also considering experiences from emerging countries that use QA processes to develop their higher education systems and keep them up to date with international levels. The concept of "integration", in this research, is understood in a double sense: i) integration between the different areas of activity, in particular between the teaching and research areas, and possibly with the so-called "third mission"; ii) the functional integration between those involved in quality assessment and management and the governance of the university. The paper will present the results of a systematic review conducted according to an integrative review method and aimed at identifying best practices of quality assurance systems, in individual countries or individual universities, with a high level of integration. The analysis of the material thus obtained has made it possible to identify common and transversal elements of QA system integration practices, as well as particularly interesting elements and strengths of these experiences, which can therefore be considered winning aspects of a QA practice. The paper will present the method of analysis carried out and the characteristics of the experiences identified, highlighting their structural elements (level of integration, areas considered, organizational levels included, etc.) and the respects in which these experiences can be considered best practices.

Keywords: quality assurance, university, integration, country

Procedia PDF Downloads 81
1011 The Real Ambassador: How Hip Hop Culture Connects and Educates across Borders

Authors: Frederick Gooding

Abstract:

This paper explores how many Hip Hop artists have intentionally and strategically invoked sustainability principles of people, planet and profits as a means to create community, compensate for and cope with structural inequalities in society. These themes not only create community within one's country, but the powerful display and demonstration of these narratives create community on a global plane. Listeners of Hip Hop are therefore able to learn about the political events occurring in another country free of censure, and establish solidarity worldwide. Hip Hop therefore can be an ingenious tool to create self-worth, recycle positive imagery, and serve as a defense mechanism against institutional and structural forces that conspire to make an upward economic and social trajectory difficult, if not impossible, for many people of color all across the world. Although the birthplace of Hip Hop, the United States of America, is still predominantly White, it has undoubtedly grown more diverse at a breathtaking pace in recent decades. Yet, whether American mainstream media will fully reflect America's newfound diversity remains to be seen. As it stands, American mainstream media is seen and enjoyed by diverse audiences not just in America, but all over the world. Thus, it is imperative that further inquiry be conducted into one of the fastest growing genres within one of the world's largest and most influential media industries, generating upwards of $10 billion annually. More importantly, hip hop, its music and associated culture collectively represent a shared social experience of significant value. They are important tools used both to inform and influence economic, social and political identity. Conversely, principles of American exceptionalism often prioritize American political issues over those of others, thereby rendering a myopic political view within the mainstream. This paper will therefore engage in an international contextualization of the global phenomenon called Hip Hop by exploring the creative genius and marketing appeal of Hip Hop within the global context of information technology, political expression and social change, in addition to taking a critical look at historically racialized imagery within mainstream media. Many artists the world over have been able to freely express themselves and connect with broader communities outside of their own borders, all through the sound practice of the craft of Hip Hop. An empirical understanding of political, social and economic forces within the United States will serve as a bridge for identifying and analyzing transnational themes of commonality for typically marginalized or disaffected communities facing similar struggles for survival and respect. The sharing of commonalities among marginalized cultures not only serves as a source of education outside of typically myopic, mainstream sources, but it also creates transnational bonds globally, to the extent that practicing artists resonate with many of the original themes of (now mostly underground) Hip Hop, as did many of the African American artists responsible for creating and fostering Hip Hop's powerful outlet of expression. Hip Hop's power of connectivity and culture-sharing transnationally across borders provides a key source of education to be taken seriously by academics.

Keywords: culture, education, global, hip hop, mainstream music, transnational

Procedia PDF Downloads 99
1010 Predicting the Impact of Scope Changes on Project Cost and Schedule Using Machine Learning Techniques

Authors: Soheila Sadeghi

Abstract:

In the dynamic landscape of project management, scope changes are an inevitable reality that can significantly impact project performance. These changes, whether initiated by stakeholders, external factors, or internal project dynamics, can lead to cost overruns and schedule delays. Accurately predicting the consequences of these changes is crucial for effective project control and informed decision-making. This study aims to develop predictive models to estimate the impact of scope changes on project cost and schedule using machine learning techniques. The research utilizes a comprehensive dataset containing detailed information on project tasks, including the Work Breakdown Structure (WBS), task type, productivity rate, estimated cost, actual cost, duration, task dependencies, scope change magnitude, and scope change timing. Multiple machine learning models are developed and evaluated to predict the impact of scope changes on project cost and schedule. These models include Linear Regression, Decision Tree, Ridge Regression, Random Forest, Gradient Boosting, and XGBoost. The dataset is split into training and testing sets, and the models are trained using the preprocessed data. Cross-validation techniques are employed to assess the robustness and generalization ability of the models. The performance of the models is evaluated using metrics such as Mean Squared Error (MSE) and R-squared. Residual plots are generated to assess the goodness of fit and identify any patterns or outliers. Hyperparameter tuning is performed to optimize the XGBoost model and improve its predictive accuracy. The feature importance analysis reveals the relative significance of different project attributes in predicting the impact on cost and schedule. Key factors such as productivity rate, scope change magnitude, task dependencies, estimated cost, actual cost, duration, and specific WBS elements are identified as influential predictors. The study highlights the importance of considering both cost and schedule implications when managing scope changes. The developed predictive models provide project managers with a data-driven tool to proactively assess the potential impact of scope changes on project cost and schedule. By leveraging these insights, project managers can make informed decisions, optimize resource allocation, and develop effective mitigation strategies. The findings of this research contribute to improved project planning, risk management, and overall project success.
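
As a rough illustration of the modelling workflow described above, the sketch below trains two of the mentioned regressors on a synthetic task-level dataset and reports cross-validated and hold-out metrics. The feature names, the synthetic target, and the choice of scikit-learn are illustrative assumptions, not the study's actual data or code.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import cross_val_score, train_test_split

# Synthetic stand-in for the task-level dataset described in the abstract
rng = np.random.default_rng(0)
n = 500
X = pd.DataFrame({
    "productivity_rate": rng.uniform(0.5, 1.5, n),
    "estimated_cost": rng.uniform(10, 200, n),
    "duration": rng.uniform(5, 60, n),
    "task_dependencies": rng.integers(0, 6, n),
    "scope_change_magnitude": rng.uniform(0, 0.5, n),
    "scope_change_timing": rng.uniform(0, 1, n),
})
# Hypothetical target: cost overrun driven mainly by change magnitude and dependencies
y = (X["estimated_cost"] * X["scope_change_magnitude"] * (1 + 0.2 * X["task_dependencies"])
     + rng.normal(0, 2, n))

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

for model in (RandomForestRegressor(random_state=42), GradientBoostingRegressor(random_state=42)):
    cv_r2 = cross_val_score(model, X_train, y_train, cv=5, scoring="r2")  # robustness check
    model.fit(X_train, y_train)
    pred = model.predict(X_test)
    print(f"{type(model).__name__}: CV R2={cv_r2.mean():.2f}  "
          f"MSE={mean_squared_error(y_test, pred):.1f}  R2={r2_score(y_test, pred):.2f}")
```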

Keywords: cost impact, machine learning, predictive modeling, schedule impact, scope changes

Procedia PDF Downloads 34
1009 Micro-Oculi Facades as a Sustainable Urban Facade

Authors: Ok-Kyun Im, Kyoung Hee Kim

Abstract:

We live in an era that faces global challenges of climate change and resource depletion. With rapid urbanization and growing energy consumption in the built environment, building facades become ever more important in architectural practice and environmental stewardship. Furthermore, building facades undergo complex dynamics of social, cultural, environmental and technological change. Kinetic facades have drawn the attention of architects, designers, and engineers in the field of adaptable, responsive and interactive architecture since the 1980s. Materials and building technologies have gradually evolved to address the technical implications of kinetic facades. The kinetic façade is becoming an independent system of the building, transforming the design methodology toward sustainable building solutions. Accordingly, there is a need for a new design methodology to guide the design of a kinetic façade and evaluate its sustainable performance. The research objectives are two-fold: first, to establish a new design methodology for kinetic facades and, second, to develop a micro-oculi façade system and assess its performance using the established design method. The design approach to the micro-oculi facade comprises 1) façade geometry optimization and 2) dynamic building energy simulation. The façade geometry optimization utilizes a multi-objective optimization process, aiming to balance quantitative and qualitative performances to address the sustainability of the built environment. The dynamic building energy simulation was carried out using the EnergyPlus and Radiance simulation engines with scripted interfaces. The micro-oculi office was compared with an office tower with a glass façade in accordance with ASHRAE 90.1-2013 to understand its energy efficiency. The micro-oculi facade is constructed with an array of circular frames attached to a pair of micro-shades called a micro-oculus. The micro-oculi are encapsulated between two glass panes to protect the kinetic mechanisms and ensure their longevity. The micro-oculus incorporates rotating gears that transmit power to adjacent micro-oculi to minimize the number of mechanical parts. The micro-oculus rotates around its center axis with a step size of 15° depending on the sun's position while maximizing daylighting potential and view-outs. A 2 ft by 2 ft prototype was fabricated to identify operational challenges and material implications of the micro-oculi facade. In this research, a systematic design methodology was proposed that integrates the multiple objectives of kinetic façade design criteria and whole-building energy performance simulation within a holistic design process. This design methodology is expected to encourage multidisciplinary collaboration between designers and engineers on issues of energy efficiency, daylighting performance and user experience during design phases. The preliminary energy simulation indicated that, compared to a glass façade, the micro-oculi façade showed energy savings due to its improved thermal properties, daylighting attributes, and dynamic solar performance across the day and seasons. It is expected that the micro-oculi façade will provide a cost-effective, environmentally-friendly, sustainable, and aesthetically pleasing alternative to glass facades. Recommendations for future studies include lab testing to validate the simulated data on the energy and optical properties of the micro-oculi façade. A 1:1 performance mock-up of the micro-oculi façade could provide an in-depth understanding of long-term operability and reveal new development opportunities for urban façade applications.
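
A minimal sketch of the kinetic control logic implied above: the micro-oculus rotation is snapped to 15° steps following the sun's position. The sun-tracking rule, clamp range, and function names are illustrative assumptions, not the authors' actual control algorithm or simulation interface.

```python
def oculus_rotation(sun_azimuth_deg, facade_azimuth_deg, step_deg=15.0):
    """Snap the micro-oculus rotation to discrete steps that follow the sun.

    Hypothetical control rule: align the shade with the sun's azimuth relative
    to the facade normal, quantised to the 15-degree step size described above.
    """
    relative = (sun_azimuth_deg - facade_azimuth_deg + 180.0) % 360.0 - 180.0
    # Quantise to the nearest step; clamp to +/-90 deg so the shade stays in-plane
    snapped = round(relative / step_deg) * step_deg
    return max(-90.0, min(90.0, snapped))

# Example: sun at 150 deg azimuth, south-facing facade (180 deg) -> -30 deg rotation
print(oculus_rotation(150.0, 180.0))
```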

Keywords: energy efficiency, kinetic facades, sustainable architecture, urban facades

Procedia PDF Downloads 254
1008 The Reliability and Shape of the Force-Power-Velocity Relationship of Strength-Trained Males Using an Instrumented Leg Press Machine

Authors: Mark Ashton Newman, Richard Blagrove, Jonathan Folland

Abstract:

The force-velocity profile of an individual has been shown to influence success in ballistic movements, independent of the individual's maximal power output; therefore, effective and accurate evaluation of an individual's F-V characteristics, and not solely maximal power output, is important. The relatively narrow range of loads typically utilised during force-velocity profiling protocols, due to the difficulty of obtaining force data at high velocities, may bring into question the accuracy of the F-V slope, along with predictions of the maximum force that the system can produce at zero velocity (F₀) and the theoretical maximum velocity against no load (V₀). As such, the reliability of the slope of the force-velocity profile, as well as V₀, has been shown to be relatively poor in comparison to F₀ and maximal power, and it has been recommended to assess velocity at loads closer to both F₀ and V₀. The aim of the present study was to assess the relative and absolute reliability of a novel instrumented leg press machine which enables the assessment of force and velocity data at loads equivalent to ≤ 10% of one repetition maximum (1RM) through to 1RM during a ballistic leg press movement. The reliability of maximal and mean force, velocity, and power, as well as the respective force-velocity and power-velocity relationships and the linearity of the force-velocity relationship, were evaluated. Sixteen male strength-trained individuals (23.6 ± 4.1 years; 177.1 ± 7.0 cm; 80.0 ± 10.8 kg) attended four sessions; during the initial visit, participants were familiarised with the leg press, modified to include a mounted force plate (Type SP3949, Force Logic, Berkshire, UK) and a Micro-Epsilon WDS-2500-P96 linear positional transducer (LPT) (Micro-Epsilon, Merseyside, UK). Peak isometric force (IsoMax) and a dynamic 1RM, both from a starting position of 81% leg length, were recorded for the dominant leg. Visits two to four saw the participants carry out the leg press movement at loads equivalent to ≤ 10%, 30%, 50%, 70%, and 90% 1RM. IsoMax was recorded during each testing visit prior to the dynamic F-V profiling repetitions. The novel leg press machine used in the present study appears to be a reliable tool for measuring F- and V-related variables across a range of loads, including velocities closer to V₀, when compared to some of the findings within the published literature. Both linear and polynomial models demonstrated good to excellent levels of reliability for the slope of the F-V relationship (SFV) and F₀, respectively, with reliability for V₀ being good using a linear model but poor using a 2nd-order polynomial model. As such, a polynomial regression model may be most appropriate when using a similar unilateral leg press setup to predict maximal force production capabilities, given only a 5% difference between F₀ and the obtained IsoMax values, while a linear model is best suited to predicting V₀.
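
To illustrate the linear force-velocity model referred to above, the short sketch below fits F = F₀ + SFV·V to load-averaged force and velocity pairs and extrapolates F₀, V₀ and theoretical maximal power (Pmax = F₀·V₀/4). The numbers are made up for illustration, and the use of numpy.polyfit is an assumption about implementation, not the authors' analysis code.

```python
import numpy as np

# Hypothetical mean force (N) and velocity (m/s) at loads from ~10% to 90% 1RM
force    = np.array([2100.0, 1800.0, 1500.0, 1150.0, 800.0])
velocity = np.array([0.35, 0.60, 0.90, 1.25, 1.60])

# Linear F-V model: F = F0 + SFV * V  (SFV is negative)
sfv, f0 = np.polyfit(velocity, force, 1)
v0 = -f0 / sfv                 # velocity-axis intercept (theoretical max velocity)
pmax = f0 * v0 / 4.0           # apex of the parabolic power-velocity relationship

print(f"F0 = {f0:.0f} N, V0 = {v0:.2f} m/s, SFV = {sfv:.0f} N·s/m, Pmax = {pmax:.0f} W")
```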

Keywords: force-velocity, leg-press, power-velocity, profiling, reliability

Procedia PDF Downloads 55
1007 The Socio-Demographics of HIV-Infected Persons with Psychological Morbidity in Zaria, Nigeria

Authors: Obiageli Helen Ezeh, Chuks Clement Ezeh

Abstract:

Background: It is estimated that more than 33 million persons are living with HIV infection globally, and in Nigeria about 3.4 million persons are living with the infection, with an annual death rate of 180,000. Psychological morbidity often accompanies chronic illnesses and may be associated with substance abuse, poor health-seeking behavior, and poor adherence to treatment programs; it may worsen existing health problems and the overall quality of life. Until the burden is effectively identified, intervention cannot be planned. Until there is a cure, the goal is to manage and cope effectively with HIV infection. Few, if any, studies have been done in this area in the North West geo-political zone of Nigeria. The study would help to identify high-risk groups and prevent the progression and spread of the infection. Aim: To identify HIV-infected persons with psychological morbidity accessing the HIV clinic at Shika Hospital, Zaria, Kaduna State, and analyze their socio-demographic profile. Methods: A cross-sectional descriptive study was carried out to assess and analyze the socio-demographic characteristics of HIV-infected persons attending Shika hospital, Zaria, Nigeria, who screened positive for psychological morbidity. A total of 109 HIV-infected persons receiving HAART at the Shika clinic, Zaria, Kaduna State, Nigeria, were administered the General Health Questionnaire (GHQ-12), measuring psychological morbidity, and a socio-demographic data form. The participants ranged in age between 18 and 75 years. Results: Data were analyzed using SPSS software version 15. Both descriptive and inferential statistics were performed on the data. Results indicate a total prevalence rate of psychological morbidity of 78 percent among participants. Of this, about 16.2 percent were severely distressed, 25.1 percent moderately distressed, and 36.7 percent mildly distressed. More females (65 percent of those with psychological morbidity) were found to be distressed than their male counterparts (55 percent). The rate was 44 percent for patients whose HIV infection was of relatively shorter duration (2-4 years) compared to those of longer duration (5-9 years and 10 years/above). The age group 21-30 years was the most affected (35 percent). The rate was 55 percent for Christians and 45 percent for Muslims. For married patients with partners, it was 20 percent; for singles, 30 percent; for the widowed, 12 percent; and for the divorced, 38 percent. At the level of tribal/ethnic groups, it was 13 percent for Ibos, 22 percent for Yorubas, 27 percent for Hausas, and 33 percent for all the other minority tribes put together. Conclusion/Recommendation: The study identified a high prevalence of psychological morbidity among HIV-infected persons and found the associated socio-demographic factors to be significant. Periodic screening of HIV-infected persons for psychological morbidity, together with psychosocial intervention, is recommended.

Keywords: socio-demographics, psychological morbidities, HIV-Infection, HAART

Procedia PDF Downloads 252
1006 The Effect of Paper Based Concept Mapping on Students' Academic Achievement and Attitude in Science Education

Authors: Orhan Akınoğlu, Arif Çömek, Ersin Elmacı, Tuğba Gündoğdu

Abstract:

The concept map is known to be a powerful tool for organizing the ideas and concepts of an individual's mind. This tool is a kind of visual map that illustrates the relationships between the concepts of a certain subject. The effect of concept mapping on cognitive and affective qualities has been one of the research topics among educational researchers for the last decades. Educators want to utilize it both as an instructional tool and as an assessment tool in classes. For that reason, this study aimed to determine the effect of concept mapping as a learning strategy in science classes on students' academic achievement and attitude. The research employed a randomized pre-test post-test control group design. Data were collected from 60 sixth-grade students from a randomly selected primary school in Turkey who participated in the study. Sixth-grade classes of the school were analyzed according to students' academic achievement, science attitude, gender, mathematics and science course grades, and GPAs before the implementation. Two of the classes were found to be equivalent (t=0.983, p>0.05), and one of them was randomly assigned as the experimental group and the other as the control group. During a 5-week period, the experimental group students (N=30) used the paper-based concept mapping method, while the control group students (N=30) were taught with the traditional approach according to the science and technology education curriculum for the light and sound unit. Both groups were taught by the same teacher, who was experienced in using concept mapping in science classes. Before the implementation, the teacher explained the theory of concept maps and showed the experimental group students how to create paper-based concept maps individually for two hours. Then, for the following two hours, she asked them to create concept maps related to their former science subjects and gave them feedback by reviewing their concept maps, to ensure they could create them during the implementation. The data were collected with a science achievement test, a science attitude scale, and a personal information form. The science achievement test and science attitude scale were administered as pre-test and post-test, while the personal information form was administered only once. The reliability coefficient of the achievement test was KR-20=0.76, and the Cronbach's alpha of the attitude scale was 0.89. SPSS statistical software was used to analyze the data. According to the results, there was a statistically significant difference between the experimental and control groups for academic achievement but not for attitude. The experimental group had significantly greater gains on the academic achievement test than the control group (t=0.02, p<0.05). The findings showed that paper-and-pencil concept mapping can be used as an effective method for improving students' academic achievement in science classes. The results have implications for further research.
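
For readers who want to reproduce the internal-consistency coefficients quoted above (KR-20 for the dichotomous achievement items, Cronbach's alpha for the attitude items), a small sketch is given below; the item matrix is hypothetical, and KR-20 is computed here simply as Cronbach's alpha over 0/1-scored items.

```python
import numpy as np

def cronbach_alpha(items):
    """items: 2-D array (respondents x items). KR-20 equals this for 0/1 items."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

# Hypothetical responses: 5 students x 4 dichotomous achievement items
achievement = [[1, 1, 0, 1], [1, 0, 0, 1], [0, 0, 0, 0], [1, 1, 1, 1], [1, 1, 0, 0]]
print("KR-20 (dichotomous items):", round(cronbach_alpha(achievement), 2))
```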

Keywords: concept mapping, science education, constructivism, academic achievement, science attitude

Procedia PDF Downloads 404
1005 Assessment of the Implications of Regional Transport and Local Emission Sources for Mitigating Particulate Matter in Thailand

Authors: Ruchirek Ratchaburi, W. Kevin. Hicks, Christopher S. Malley, Lisa D. Emberson

Abstract:

Air pollution problems in Thailand have improved over the last few decades, but in some areas, concentrations of coarse particulate matter (PM₁₀) are above health and regulatory guidelines. It is, therefore, useful to investigate how PM₁₀ varies across Thailand, what conditions cause this variation, and how PM₁₀ concentrations could be reduced. This research uses data collected by the Thailand Pollution Control Department (PCD) from 17 monitoring sites, located across 12 provinces, and obtained between 2011 and 2015 to assess PM₁₀ concentrations and the conditions that lead to different levels of pollution. This is achieved through exploration of air mass pathways using trajectory analysis, used in conjunction with the monitoring data, to understand the contribution of different months, hours of the day, and source regions to annual PM₁₀ concentrations in Thailand. A focus is placed on locations that exceed the national standard for the protection of human health. The analysis shows how this approach can be used to explore the influence of biomass burning on annual average PM₁₀ concentrations and the difference in air pollution conditions between Northern and Southern Thailand. The results demonstrate the substantial contribution that open biomass burning from agriculture and forest fires in Thailand and neighboring countries makes to annual average PM₁₀ concentrations. The analysis of PM₁₀ measurements at monitoring sites in Northern Thailand shows that, in general, high concentrations tend to occur in March and that these particularly high monthly concentrations make a substantial contribution to the overall annual average concentration. In 2011, a > 75% reduction in the extent of biomass burning in Northern Thailand and in neighboring countries resulted in a substantial reduction not only in the magnitude and frequency of peak PM₁₀ concentrations but also in annual average PM₁₀ concentrations at sites across Northern Thailand. In Southern Thailand, the annual average PM₁₀ concentrations for individual years between 2011 and 2015 did not exceed the human health standard at any site. The highest peak concentrations in Southern Thailand were much lower than those in Northern Thailand for all sites. The peak concentrations at sites in Southern Thailand generally occurred between June and October and were associated with air mass back trajectories that spent a substantial proportion of time over the sea, Indonesia, Malaysia, and Thailand prior to arrival at the monitoring sites. The results show that emission reductions from biomass burning and forest fires require action on national and international scales, in both Thailand and neighboring countries; such action could contribute to ensuring compliance with Thailand's air quality standards.
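
The monthly contribution analysis described above can be sketched as follows: given daily PM₁₀ observations, compute the monthly means and each month's share of the annual average, then compare the annual mean against a threshold. The synthetic data, the March "burning season" bump, and the 50 µg/m³ annual threshold are assumptions used only for illustration, not PCD data or the exact Thai standard.

```python
import numpy as np
import pandas as pd

# Synthetic stand-in for daily PM10 observations at one monitoring site (ug/m3)
rng = np.random.default_rng(1)
dates = pd.date_range("2014-01-01", "2014-12-31", freq="D")
pm10 = rng.gamma(4, 10, len(dates)) + np.where(dates.month == 3, 60, 0)  # March burning peak
df = pd.DataFrame({"date": dates, "pm10": pm10})

monthly = df.groupby(df["date"].dt.month)["pm10"].mean()
annual = monthly.mean()                      # annual mean taken as the mean of monthly means
share = monthly / monthly.sum()              # each month's contribution to the annual average

print(share.round(2))                        # March dominates in this synthetic example
print(f"annual mean = {annual:.0f} ug/m3 "
      f"({'above' if annual > 50 else 'below'} an assumed 50 ug/m3 annual standard)")
```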

Keywords: annual average concentration, long-range transport, open biomass burning, particulate matter

Procedia PDF Downloads 180
1004 A Prospective Study of a Clinically Significant Anatomical Change in Head and Neck Intensity-Modulated Radiation Therapy Using Transit Electronic Portal Imaging Device Images

Authors: Wilai Masanga, Chirapha Tannanonta, Sangutid Thongsawad, Sasikarn Chamchod, Todsaporn Fuangrod

Abstract:

The major factors in radiotherapy for head and neck (HN) cancers include the patient's anatomical changes and tumour shrinkage. These changes can significantly affect the planned dose distribution and cause the treatment plan to deteriorate. Comparing measured transit EPID images to predicted EPID images using gamma analysis has been clinically implemented to verify dose accuracy as part of an adaptive radiotherapy protocol. However, a global gamma analysis is not sensitive to changes in some critical organs, as the entire treatment field is compared. The objective of this feasibility study is to evaluate the dosimetric response to patient anatomical changes during the treatment course in HN IMRT (Head and Neck Intensity-Modulated Radiation Therapy) using a novel comparison method: organ-of-interest gamma analysis. This method provides more sensitive detection of specific organ changes. Five HN IMRT patients who were replanned because of tumour shrinkage and patient weight loss that critically affected parotid size were randomly selected, and their transit dosimetry was evaluated. A comprehensive physics-based model was used to generate a series of predicted transit EPID images for each gantry angle from the original computed tomography (CT) and replan CT datasets. The patient structures, including the left and right parotid, spinal cord, and planning target volume (PTV56), were projected to EPID level. The agreement between the transit images generated from the original CT and the replanned CT was quantified using gamma analysis with 3%, 3 mm criteria. Moreover, the gamma pass-rate was calculated only within each projected structure. The gamma pass-rates in the right parotid and PTV56 between the predicted transit images of the original CT and the replan CT were 42.8% (± 17.2%) and 54.7% (± 21.5%), respectively. The gamma pass-rates for the other projected organs were greater than 80%. Additionally, the results of the organ-of-interest gamma analysis were compared with 3-dimensional cone-beam computed tomography (3D-CBCT) and the rationale for replanning given by radiation oncologists. This showed that using only registration of 3D-CBCT to the original CT does not capture the dosimetric impact of anatomical changes. Using transit EPID images with organ-of-interest gamma analysis can provide additional information for assessing the suitability of a treatment plan.
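
A simplified sketch of the organ-of-interest pass-rate step described above: assuming a per-pixel gamma map (3%/3 mm) has already been computed between the predicted and measured transit EPID images, the pass rate is evaluated only inside each projected structure mask rather than over the whole field. The array shapes, random gamma values, and mask locations are illustrative assumptions.

```python
import numpy as np

def organ_pass_rate(gamma_map, organ_mask):
    """Percentage of pixels with gamma <= 1 inside one projected organ mask."""
    inside = gamma_map[organ_mask]
    return 100.0 * np.count_nonzero(inside <= 1.0) / inside.size

# Hypothetical 3%/3mm gamma map and projected structure masks at EPID level
gamma_map = np.random.rand(256, 256) * 2.0
masks = {"right_parotid": np.zeros((256, 256), bool),
         "PTV56": np.zeros((256, 256), bool)}
masks["right_parotid"][60:90, 150:180] = True
masks["PTV56"][80:170, 90:170] = True

for organ, mask in masks.items():
    print(organ, f"{organ_pass_rate(gamma_map, mask):.1f}% pass (3%/3mm)")
```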

Keywords: re-plan, anatomical change, transit electronic portal imaging device, EPID, head and neck

Procedia PDF Downloads 213
1003 Improving Screening and Treatment of Binge Eating Disorders in Pediatric Weight Management Clinic through a Quality Improvement Framework

Authors: Cristina Fernandez, Felix Amparano, John Tumberger, Stephani Stancil, Sarah Hampl, Brooke Sweeney, Amy R. Beck, Helena H Laroche, Jared Tucker, Eileen Chaves, Sara Gould, Matthew Lindquist, Lora Edwards, Renee Arensberg, Meredith Dreyer, Jazmine Cedeno, Alleen Cummins, Jennifer Lisondra, Katie Cox, Kelsey Dean, Rachel Perera, Nicholas A. Clark

Abstract:

Background: Adolescents with obesity are at higher risk of disordered eating than the general population. Detection of eating disorders (ED) is difficult. Screening questionnaires may aid in the early detection of ED. Our team's prior efforts focused on increasing ED screening rates to ≥90% using a validated 10-question adolescent binge eating disorder screening questionnaire (ADO-BED). This aim was achieved. We then aimed to improve treatment plan initiation for patients ≥12 years of age who screen positive for BED within our weight management clinic (WMC) from 33% to 70% within 12 months. Methods: Our WMC is within a tertiary-care, free-standing children's hospital. A3, an improvement framework, was used. A multidisciplinary team (physicians, nurses, registered dietitians, psychologists, and exercise physiologists) was created. The outcome measure was documentation of treatment plan initiation for those who screened positive (goal 70%). The process measure was the ADO-BED screening rate of WMC patients (goal ≥90%). Plan-Do-Study-Act (PDSA) cycle 1 included provider education on the current literature and treatment plan initiation based upon ADO-BED responses. PDSA cycle 2 involved increasing documentation of treatment plans and retraining providers on the process. Pre-defined treatment plans were: 1) repeat screen in 3-6 months, 2) resources provided only, or 3) comprehensive multidisciplinary weight management team evaluation. Run charts monitored impact over time. Results: Within 9 months, 166 patients were seen in the WMC. The process measure showed sustained performance above goal (mean 98%). The outcome measure showed special cause improvement from a mean of 33% to 100% (n=31). Of the treatment plans provided, 45% received Plan 1, 4% Plan 2, and 46% Plan 3. Conclusion: Through a multidisciplinary improvement team approach, we maintained sustained ADO-BED screening performance and, ahead of our 12-month timeline, achieved our project aim. Our efforts may serve as a model for other multidisciplinary WMCs. Next steps may include expanding the project scope to other WM programs.

Keywords: obesity, pediatrics, clinic, eating disorder

Procedia PDF Downloads 57
1002 Predictors of Pericardial Effusion Requiring Drainage Following Coronary Artery Bypass Graft Surgery: A Retrospective Analysis

Authors: Nicholas McNamara, John Brookes, Michael Williams, Manish Mathew, Elizabeth Brookes, Tristan Yan, Paul Bannon

Abstract:

Objective: Pericardial effusions are an uncommon but potentially fatal complication after cardiac surgery. The goal of this study was to describe the incidence and risk factors associated with the development of pericardial effusion requiring drainage after coronary artery bypass graft surgery (CABG). Methods: A retrospective analysis was undertaken using prospectively collected data. All adult patients who underwent CABG at our institution between 1st January 2017 and 31st December 2018 were included. Pericardial effusion was diagnosed using transthoracic echocardiography (TTE) performed for clinical suspicion of pre-tamponade or tamponade. Drainage was undertaken if considered clinically necessary and performed via a sub-xiphoid incision, pericardiocentesis, or re-sternotomy at the discretion of the treating surgeon. Patient demographics, operative characteristics, anticoagulant exposure, and postoperative outcomes were examined to identify those variables associated with the development of pericardial effusion requiring drainage. Tests of association were performed using the Fisher exact test for dichotomous variables and the Student t-test for continuous variables. Logistic regression models were used to determine univariate predictors of pericardial effusion requiring drainage. Results: Between January 1st, 2017, and December 31st, 2018, a total of 408 patients underwent CABG at our institution, and eight (1.9%) required drainage of a pericardial effusion. There was no difference in age, gender, or the proportion of patients on preoperative therapeutic heparin between the study and control groups. Univariate analysis identified preoperative atrial arrhythmia (37.5% vs 8.8%, p = 0.03), reduced left ventricular ejection fraction (47% vs 56%, p = 0.04), longer cardiopulmonary bypass (130 vs 84 min, p < 0.01) and cross-clamp (107 vs 62 min, p < 0.01) times, higher drain output in the first four postoperative hours (420 vs 213 mL, p < 0.01), postoperative atrial fibrillation (100% vs 32%, p < 0.01), and pleural effusion requiring drainage (87.5% vs 12.5%, p < 0.01) as being associated with the development of pericardial effusion requiring drainage. Conclusion: In this study, the incidence of pericardial effusion requiring drainage was 1.9%. Several factors, mainly related to preoperative or postoperative arrhythmia, length of surgery, and pleural effusion requiring drainage, were identified as being associated with developing clinically significant pericardial effusions. High clinical suspicion and a low threshold for transthoracic echocardiography are pertinent to ensure this potentially lethal condition is not missed.
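
The univariate tests of association mentioned above can be sketched as below: a Fisher exact test for a dichotomous variable and a Student t-test for a continuous one. The 2x2 counts and bypass times are hypothetical illustrations of the method, not the study dataset.

```python
from scipy import stats

# Hypothetical 2x2 table: preoperative atrial arrhythmia vs. effusion requiring drainage
#                 arrhythmia   no arrhythmia
# effusion              3             5
# no effusion          35           365
odds_ratio, p = stats.fisher_exact([[3, 5], [35, 365]])
print(f"Fisher exact: OR = {odds_ratio:.1f}, p = {p:.3f}")

# Hypothetical cardiopulmonary bypass times (min) in the two groups
cpb_effusion = [130, 145, 120, 160, 118, 138, 125, 104]
cpb_control = [84, 90, 75, 100, 82, 88, 70, 95, 80, 86]
t, p = stats.ttest_ind(cpb_effusion, cpb_control)   # Student t-test (equal variances assumed)
print(f"Student t-test: t = {t:.2f}, p = {p:.4f}")
```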

Keywords: coronary artery bypass, pericardial effusion, pericardiocentesis, tamponade, sub-xiphoid drainage

Procedia PDF Downloads 159
1001 Optimal Uses of Rainwater to Maintain Water Level in Gomti Nagar, Uttar Pradesh, India

Authors: Alok Saini, Rajkumar Ghosh

Abstract:

Water is an important natural resource for the survival of all living things, but freshwater scarcity exists in some parts of the world. This study predicts that the Gomti Nagar area (49.2 sq. km.) will harvest about 91110 ML of rainwater by 2051 (assuming annual rainfall remains constant at present levels). However, only 17.71 ML of rainwater was harvested from 53 buildings in the Gomti Nagar area in the year 2021. The water level in Gomti Nagar will rise by 13 cm from such groundwater recharge. The total annual groundwater abstraction from the Gomti Nagar area was 35332 ML (in 2021). Due to hydrogeological constraints and lower annual rainfall, groundwater recharge is less than groundwater abstraction. In the current scenario, only 0.07% of rainwater recharges the groundwater through Rooftop Rainwater Harvesting Systems (RTRWHs) in Gomti Nagar, but if RTRWHs were installed in all buildings, 12.39% of rainwater could recharge the groundwater table in the Gomti Nagar area. Gomti Nagar is situated in 'Zone–A' (water distribution area), and groundwater is the primary source of freshwater supply. In Gomti Nagar, the difference between groundwater abstraction and recharge will be 735570 ML over 30 years. Statistically, all buildings in Gomti Nagar (new and renovated) could harvest 3037 ML of rainwater through RTRWHs annually. The most recent monsoonal recharge in Gomti Nagar was 10813 ML/yr. Harvested rainwater collected from RTRWHs can be used for rooftop irrigation and for residential kitchens and gardens (home-grown fruit and vegetables). According to bylaws, RTRWH installations are required in both newly constructed and existing buildings with plot areas of 300 sq. m or above. Harvested rainwater is of higher quality than contaminated groundwater, and buildings harvesting rainwater through RTRWHs can be considered more water self-sufficient. RTRWHs are an inexpensive, eco-friendly, and sustainable alternative water resource for artificial recharge. This study also predicts a water level rise of about 3.9 m in the Gomti Nagar area by 2051, but only if all buildings install RTRWHs and use the harvested water for groundwater recharging. As a result, this study serves as an impact assessment of RTRWHs implementation for the water scarcity problem in the Gomti Nagar area (1.36 sq. km.). This study suggests that common storage tanks (recharge wells) should be built for groups of at least ten (10) households so that an optimal amount of harvested rainwater can be stored annually. Artificial recharge from alternative water sources will be required to reverse the declining water level trend and balance the groundwater table in this area; otherwise, the over-exploitation of groundwater may lead to land subsidence and the development of vertical cracks.
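
The basic water-balance arithmetic behind figures like those quoted above can be sketched as follows; the rooftop fraction, rainfall, runoff coefficient, and specific yield are illustrative assumptions, not values taken from the study.

```python
# Back-of-the-envelope rooftop rainwater harvesting / recharge arithmetic (assumed inputs)
AREA_KM2 = 49.2              # Gomti Nagar area quoted in the abstract
ROOF_FRACTION = 0.10         # assumed share of the area covered by rooftops
ANNUAL_RAINFALL_M = 0.75     # assumed annual rainfall (m)
RUNOFF_COEFF = 0.85          # assumed runoff coefficient for hard roofs
SPECIFIC_YIELD = 0.12        # assumed aquifer specific yield

area_m2 = AREA_KM2 * 1e6
roof_area_m2 = area_m2 * ROOF_FRACTION

# Harvestable volume in megalitres (1 ML = 1000 m3)
harvest_ml = roof_area_m2 * ANNUAL_RAINFALL_M * RUNOFF_COEFF / 1000.0

# Equivalent water-table rise if the whole volume recharges the aquifer under the area
rise_m = (harvest_ml * 1000.0) / (area_m2 * SPECIFIC_YIELD)

print(f"Annual rooftop harvest ~ {harvest_ml:.0f} ML, water-table rise ~ {rise_m*100:.0f} cm/yr")
```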

Keywords: aquifer, aquitard, artificial recharge, bylaws, groundwater, monsoon, rainfall, rooftop rainwater harvesting system (RTRWHs), water table, water level

Procedia PDF Downloads 89
1000 Analyzing the Impact of Bariatric Surgery in Obesity Associated Chronic Kidney Disease: A 2-Year Observational Study

Authors: Daniela Magalhaes, Jorge Pedro, Pedro Souteiro, Joao S. Neves, Sofia Castro-Oliveira, Vanessa Guerreiro, Rita Bettencourt-Silva, Maria M. Costa, Ana Varela, Joana Queiros, Paula Freitas, Davide Carvalho

Abstract:

Introduction: Obesity is an independent risk factor for renal dysfunction. Our aims were: (1) to evaluate the impact of bariatric surgery (BS) on renal function; (2) to clarify the factors determining the postoperative evolution of the glomerular filtration rate (GFR); (3) to assess the occurrence of oxalate-mediated renal complications. Methods: We investigated a cohort of 1448 obese patients who underwent bariatric surgery. Those with basal GFR (GFR0) < 30 mL/min or without information about the GFR 2 years post-surgery (GFR2) were excluded. Results: We included 725 patients, of whom 647 (89.2%) were women, with a median age of 41 (IQR 34-51) years, a median weight of 112.4 (IQR 103.0-125.0) kg, and a median BMI of 43.4 (IQR 40.6-46.9) kg/m2. Of these, 459 (63.3%) underwent gastric bypass (RYGB), 144 (19.9%) received an adjustable gastric band (AGB), and 122 (16.8%) underwent vertical gastrectomy (VG). At 2 years post-surgery, excess weight loss (EWL) was 60.1 (IQR 43.7-72.4) %. There was a significant improvement in metabolic and inflammatory status, as well as a significant decrease in the proportion of patients with diabetes, arterial hypertension, and dyslipidemia (p < 0.0001). At baseline, 38 (5.2%) of subjects had hyperfiltration with a GFR0 ≥ 125 mL/min/1.73m2, 492 (67.9%) had a GFR0 of 90-124 mL/min/1.73m2, 178 (24.6%) had a GFR0 of 60-89 mL/min/1.73m2, and 17 (2.3%) had a GFR0 < 60 mL/min/1.73m2. GFR decreased in 63.2% of patients with hyperfiltration (ΔGFR=-2.5±7.6), and increased in 96.6% (ΔGFR=22.2±12.0) and 82.4% (ΔGFR=24.3±30.0) of the subjects with GFR0 60-89 and < 60 mL/min/1.73m2, respectively (p < 0.0001). This trend was maintained when adjustment was made for the type of surgery performed. Of 321 patients, 10 (3.3%) had a urinary albumin excretion (UAE) > 300 mg/dL (A3), 44 (14.6%) had a UAE of 30-300 mg/dL (A2), and 247 (82.1%) had a UAE < 30 mg/dL (A1). Albuminuria decreased after surgery, and at 2-year follow-up only 1 (0.3%) patient had A3, 17 (5.6%) had A2, and 283 (94%) had A1 (p < 0.0001). In multivariate analysis, the variables independently associated with ΔGFR were BMI (positively) and fasting plasma glucose (negatively). During the 2-year follow-up, only 57 of the 725 patients had transient urinary excretion of calcium oxalate crystals. None had records of oxalate-mediated renal complications at our center. Conclusions: The evolution of GFR after BS seems to depend on the initial renal function, as it decreases in subjects with hyperfiltration but tends to increase in those with renal dysfunction. Our results suggest that BS is associated with improvement of renal outcomes, without a significant increase in renal complications. So, apart from the clear benefits in metabolic and inflammatory status, obese adults with nondialysis-dependent CKD should perhaps be referred for bariatric surgery evaluation.
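
A small sketch of the stratified analysis described above: patients are binned by baseline GFR into the study's four categories, and the mean two-year change in GFR (ΔGFR = GFR2 − GFR0) is summarised per category. The DataFrame values are hypothetical stand-ins for the cohort data.

```python
import pandas as pd

# Hypothetical cohort extract with baseline and 2-year GFR (mL/min/1.73 m2)
df = pd.DataFrame({"gfr0": [130, 126, 118, 95, 82, 70, 65, 55, 48],
                   "gfr2": [125, 122, 121, 97, 102, 95, 88, 80, 75]})

bins = [0, 60, 90, 125, float("inf")]
labels = ["<60", "60-89", "90-124", ">=125 (hyperfiltration)"]
df["baseline_category"] = pd.cut(df["gfr0"], bins=bins, labels=labels, right=False)
df["delta_gfr"] = df["gfr2"] - df["gfr0"]

print(df.groupby("baseline_category", observed=True)["delta_gfr"].agg(["count", "mean"]))
```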

Keywords: albuminuria, bariatric surgery, glomerular filtration rate, renal function

Procedia PDF Downloads 355
999 Monitoring of Serological Test of Blood Serum in Indicator Groups of the Population of Central Kazakhstan

Authors: Praskovya Britskaya, Fatima Shaizadina, Alua Omarova, Nessipkul Alysheva

Abstract:

Planned preventive vaccination, which is carried out in the Republic of Kazakhstan, has promoted a sustained decrease in the incidence of measles and viral hepatitis B (VHB). Among VHB patients, people of young, working age prevail. Monitoring of infectious disease incidence, monitoring of immunization coverage of the population, and random serological control of immunity enable timely identification of the spread of the pathogen, of the effectiveness of the measures taken, and forecasting. Serological blood analysis was conducted in indicator groups of the population of Central Kazakhstan with the purpose of identifying antibody titres for vaccine-preventable infections (measles, viral hepatitis B). Measles antibodies were determined by enzyme-linked immunosorbent assay (ELISA) with the "VektoKor" IgG test system ('Vektor-Best' JSC). Antibodies to the HBs antigen of the hepatitis B virus in blood serum were identified by ELISA with the VektoHBsAg antibody test system ('Vektor-Best' JSC). The result of the analysis was considered positive if the concentration of IgG to measles virus in the studied sample was 0.18 IU/ml or more. The protective level of anti-HBsAg concentration is 10 mIU/ml. The results of the study of postvaccinal measles immunity showed that the share of seropositive people was 87.7% of the total number surveyed. The level of postvaccinal immunity to measles differs between age groups. Among people older than 56, the percentage of seropositive individuals was 95.2%. Among people aged 15-25, 87.0% were seropositive, and in the 36-45 age group, 86.6%. In the age groups 25-35 and 36-45, the share of seropositive people was approximately the same – 88.5% and 88.8%, respectively. The share of people seronegative to the measles virus was 12.3%. The largest shares of seronegative people were found among people aged 36-45 (13.4%) and 15-25 (13.0%). The analysis of the results for postvaccinal immunity to viral hepatitis B showed that, of all those surveyed, only 33.5% have the protective level of anti-HBsAg concentration of 10 mIU/ml or more. The largest share of people protected from the VHB virus is observed in the 36-45 age group, at 60%. In the indicator group above 56, seropositive people made up 4.8%. A high percentage of seronegative people was observed in all studied age groups, from 40.0% to 95.2%. The group of people least protected from contracting VHB is people above 56 (95.2% seronegative). The probability of contracting VHB is also high among young people aged 25-35, among whom the percentage of seronegative people was 80%. Thus, the results of the conducted research testify to the need for serological monitoring of postvaccinal immunity for the purpose of operational assessment of the epidemiological situation, early identification of its changes, and prediction of approaching danger.
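
The seroprevalence calculation behind the figures above can be sketched as follows, using the cut-off values quoted in the abstract (measles IgG ≥ 0.18 IU/ml, anti-HBs ≥ 10 mIU/ml); the individual-level records are hypothetical.

```python
import pandas as pd

MEASLES_CUTOFF_IU = 0.18    # IgG concentration considered positive (from the abstract)
HBS_CUTOFF_MIU = 10.0       # protective anti-HBs level (from the abstract)

# Hypothetical individual serology records
df = pd.DataFrame({
    "age_group": ["15-25", "15-25", "25-35", "36-45", "46-55", ">56", ">56"],
    "measles_igg": [0.25, 0.10, 0.40, 0.19, 0.05, 0.33, 0.21],
    "anti_hbs":    [12.0, 3.0, 55.0, 9.0, 1.0, 4.0, 2.0],
})

df["measles_pos"] = df["measles_igg"] >= MEASLES_CUTOFF_IU
df["hbs_protected"] = df["anti_hbs"] >= HBS_CUTOFF_MIU

summary = df.groupby("age_group")[["measles_pos", "hbs_protected"]].mean() * 100
print(summary.round(1))   # percentage seropositive / protected per indicator group
```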

Keywords: antibodies, blood serum, immunity, immunoglobulin

Procedia PDF Downloads 252
998 Development of Perovskite Quantum Dots Light Emitting Diode by Dual-Source Evaporation

Authors: Antoine Dumont, Weiji Hong, Zheng-Hong Lu

Abstract:

Light emitting diodes (LEDs) are steadily becoming the new standard for luminescent display devices because of their energy efficiency, relatively low cost, and the purity of the light they emit. Our research focuses on the optical properties of the lead halide perovskite CsPbBr₃ and its family, which are showing steadily improving performance in LEDs and solar cells. The objective of this work is to investigate CsPbBr₃, made by physical vapor deposition instead of the usual solution-processed perovskites, as an emitting layer for use in LEDs. Deposition in vacuum eliminates any risk of contaminants as well as the necessity for chemical ligands in the synthesis of quantum dots. Initial results show the versatility of the dual-source evaporation method, which allowed us to create different phases in bulk form by altering the mole ratio or deposition rate of CsBr and PbBr₂. The distinct phases Cs₄PbBr₆, CsPbBr₃ and CsPb₂Br₅ – confirmed through XPS (X-ray photoelectron spectroscopy) and X-ray diffraction analysis – have different optical properties and morphologies that can be used for specific applications in optoelectronics. We are particularly focused on the blue shift expected from quantum dots (QDs) and the stability of the perovskite in this form. We have already obtained proof of the formation of QDs through our dual-source evaporation method with electron microscope imaging and photoluminescence testing, which we understand is a first in the community. We also incorporated the QDs in an LED structure to test the electroluminescence and the effect on performance, and have already observed a significant wavelength shift. The goal is to reach 480 nm after shifting from the original 528 nm bulk emission. The hole transport layer (HTL) material onto which the CsPbBr₃ is evaporated is a critical part of this study, as the surface energy interaction dictates the behaviour of the QD growth. A thorough study to determine the optimal HTL is in progress. A strong blue shift for a typically green-emitting material like CsPbBr₃ would eliminate the necessity of using blue-emitting Cl-based perovskite compounds and could prove to be more stable in a QD structure. The final aim is to make a perovskite QD LED with strong blue luminescence, fabricated through a dual-source evaporation technique that could be scaled to the industry level, making this device a viable and cost-effective alternative to current commercial LEDs.

Keywords: material physics, perovskite, light emitting diode, quantum dots, high vacuum deposition, thin film processing

Procedia PDF Downloads 159
997 Additional Pathological Findings Using MRI on Patients with First Time Traumatic Lateral Patella Dislocation: A Study of 150 Patients

Authors: Ophir Segal, Daniel Weltsch, Shay Tenenbaum, Ran Thein

Abstract:

Purpose: Patients with lateral patellar dislocation (LPD) are not always referred for an MRI. This might be the case in first-time LPD patients without surgical indications or in patients with recurrent LPD who had an MRI in previous episodes. Unfortunately, in some cases, there are additional knee pathological findings, which include tearing of the collateral or cruciate ligaments and injury to the tendons or menisci. These findings might be overlooked during the physical examination or masked by nonspecific clinical findings like knee pain, effusion, or hemarthrosis. The prevalence of these findings, which can be revealed by MRI, is inconsistently reported in the literature and is considered rare. In our practice, all patients with LPD are sent for MRI after LPD. Therefore, we designed a retrospective comparative study to evaluate the prevalence of additional pathological findings in patients with acute traumatic LPD who had undergone MRI, comparing different groups of patients according to age, sex, and Tibial Tuberosity-Trochlear Groove (TT-TG) distance. Methods: MRI scans of the knee in patients after traumatic LPD were evaluated for the presence of additional pathological findings, such as injuries to ligaments (anterior/posterior cruciate ligament (ACL, PCL), medial/lateral collateral ligament (MCL, LCL)), injuries to tendons (quadriceps, patellar), menisci (medial/lateral meniscus (MM, LM)), and the tibial plateau, by a fellowship-trained, senior musculoskeletal radiologist. A comparison between different groups of patients was performed according to age (age group < 25 years, age group > 25 years), sex (male/female group), and TT-TG distance (TT-TG < 15 group, TT-TG > 15 group). A descriptive and comparative statistical analysis was performed. Results: 150 patients were included in this study. All suffered LPD between the years 2012-2017 (mean age 21.3 ± SD 8.9, 86 males). ACL, PCL, MCL, and LCL complete or partial tears were found in 17 (11.3%), 3 (2%), 22 (14.6%), and 4 (2.7%) of the patients, respectively. MM and LM tears were found in 10 (6.7%) and 3 (2%) of the patients, respectively. A higher prevalence of PCL injury, MM tear, and LM tear was found in the older age group compared to the younger group of patients (10.5% vs. 1.8%, 18.4% vs. 2.7%, and 7.9% vs. 0%, respectively, p<0.05). A higher prevalence of non-displaced MM tear and LCL injury was found in the male group compared to the female group (8.1% vs. 0% and 8.1% vs. 0%, respectively, p<0.05). A higher prevalence of ACL injury was found in the normal TT-TG group compared to the pathologic TT-TG group (17.5% vs. 2.3%, p=0.0184). Conclusions: Overall, 43 out of 150 (28.7%) of the patients' MRIs were positive for additional pathological radiological findings. Interestingly, a higher prevalence of additional pathologies was found in the groups of patients with a lower risk of recurrent LPD, including males, patients older than 25, and patients with TT-TG lower than 15 mm, who therefore might not be referred for an MRI scan. Thus, we recommend a strict physical examination, awareness of the high prevalence of additional pathological findings, and consideration of performing an MRI in all patients after LPD.

Keywords: additional findings, lateral patellar dislocation (LPD), MRI scan, traumatic patellar dislocation, cruciate ligaments injuries, menisci injuries, collateral ligaments injuries

Procedia PDF Downloads 140
996 Mechanical Properties of Poly(Propylene)-Based Graphene Nanocomposites

Authors: Luiza Melo De Lima, Tito Trindade, Jose M. Oliveira

Abstract:

The development of thermoplastic-based graphene nanocomposites has been of great interest not only to the scientific community but also to different industrial sectors. Due to the possible improvement of performance and weight reduction, thermoplastic nanocomposites hold great promise as a new class of materials. These nanocomposites are of relevance for the automotive industry, namely because the CO₂ emission limits imposed by European Commission (EC) regulations can be met without compromising the car's performance, by reducing its weight. Thermoplastic polymers have some advantages over thermosetting polymers, such as higher productivity, lower density, and recyclability. In the automotive industry, for example, poly(propylene) (PP) is a common thermoplastic polymer, which represents more than half of the polymeric raw material used in automotive parts. Graphene-based materials (GBM) are potential nanofillers that can improve the properties of polymer matrices at very low loading. In comparison to other composites, such as fiber-based composites, the weight reduction can positively affect their processing and future applications. However, the properties and performance of GBM/polymer nanocomposites depend on the type of GBM and polymer matrix, the degree of dispersion, and especially the type of interactions between the fillers and the polymer matrix. In order to take advantage of the superior mechanical strength of GBM, strong interfacial strength between GBM and the polymer matrix is required for efficient stress transfer from GBM to the polymer. Thus, chemical compatibilizers and physicochemical modifications have been reported as important tools during the processing of these nanocomposites. In this study, PP-based nanocomposites were obtained by a simple melt blending technique, using a Brabender-type mixer. Graphene nanoplatelets (GnPs) were applied as structural reinforcement. Two compatibilizers were used to improve the interaction between the PP matrix and the GnPs: PP grafted with maleic anhydride (PPgMA) and PPgMA modified with a tertiary amine alcohol (PPgDM). The samples for tensile and Charpy impact tests were obtained by injection molding. The results suggested that the presence of GnPs can increase the mechanical strength of the polymer. However, it was verified that the presence of GnPs can reduce impact resistance, making the nanocomposites more brittle than neat PP. The incorporation of the compatibilizers increased the impact resistance, suggesting that the compatibilizers can enhance the adhesion between PP and GnPs. The increase in Young's modulus of the non-compatibilized nanocomposite compared to neat PP demonstrated that GnP incorporation can improve the stiffness of the polymer. This trend can be related to the several physical crosslinking points between the PP matrix and the GnPs. Furthermore, the decrease in strain at yield of PP/GnPs, together with the enhancement of Young's modulus, confirms that the GnP incorporation led to an increase in stiffness but a decrease in toughness. Moreover, the results demonstrated that the incorporation of compatibilizers did not affect the Young's modulus and strain-at-yield results compared to the non-compatibilized nanocomposite. The incorporation of these compatibilizers showed an improvement in the nanocomposites' mechanical properties compared both to the non-compatibilized nanocomposite and to a PP sample used as reference.
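
For the tensile results discussed above, Young's modulus is typically taken as the slope of the initial linear region of the stress-strain curve; the short sketch below does this with a least-squares fit on hypothetical test data (the strain window and the values are assumptions, not the study's measurements).

```python
import numpy as np

# Hypothetical tensile test record: strain (dimensionless) and stress (MPa)
strain = np.array([0.000, 0.002, 0.004, 0.006, 0.008, 0.010, 0.020, 0.040])
stress = np.array([0.0, 3.0, 6.1, 9.0, 11.9, 14.8, 24.0, 30.0])

elastic = strain <= 0.01                     # assumed linear-elastic window
modulus_mpa, _ = np.polyfit(strain[elastic], stress[elastic], 1)

print(f"Young's modulus ~ {modulus_mpa/1000:.2f} GPa")  # ~1.5 GPa, a typical order for PP
```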

Keywords: graphene nanoplatelets, mechanical properties, melt blending processing, poly(propylene)-based nanocomposites

Procedia PDF Downloads 184
995 Stochastic Modelling for Mixed Mode Fatigue Delamination Growth of Wind Turbine Composite Blades

Authors: Chi Zhang, Hua-Peng Chen

Abstract:

With the increasing demand on resources in the world, renewable and clean energy has been considered as an alternative to traditional sources. One practical example of using wind energy is the wind turbine, which has gained increasing attention in recent research. Like most offshore structures, the blades, which are the most critical components of the wind turbine, will be subjected to millions of loading cycles during their service life. To operate safely in marine environments, the blades are typically made from fibre-reinforced composite materials to resist fatigue delamination and the harsh environment. The fatigue crack development of blades is uncertain because of the indeterminate mechanical properties of composites and the uncertainties of the offshore environment, such as wave loads, wind loads, and humidity. There are three main delamination failure modes for composite blades, and the most common failure type in practice involves mixed mode loading, typically a combination of opening (mode 1) and shear (mode 2). However, fatigue crack development under mixed mode cannot be predicted deterministically because of the various uncertainties in realistic practical situations. Therefore, selecting an effective stochastic model to evaluate the mixed mode behaviour of wind turbine blades is a critical issue. In previous studies, the gamma process has been considered an appropriate stochastic approach, as it models deterioration as a process that proceeds in one direction, consistent with the realistic situation of fatigue damage failure of wind turbine blades. On the basis of existing studies, various Paris law equations are discussed to simulate the propagation of fatigue crack growth. This paper develops a Paris model with stochastic deterioration modelling according to the gamma process for predicting fatigue crack performance over the design service life. A numerical example of wind turbine composite materials is investigated to predict the mixed mode crack depth by the Paris law and the probability of fatigue failure by the gamma process. Probability of failure curves under different situations are obtained from the stochastic deterioration model for comparison. Compared with experimental results, the gamma process can take the uncertain values into consideration for crack propagation under mixed mode, and the stochastic deterioration process agrees well with the realistic crack process for composite blades. Finally, according to the predicted results from the gamma stochastic model, assessment strategies for composite blades are developed to reduce total lifecycle costs and increase resistance to fatigue crack growth.
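
A minimal Monte Carlo sketch of the combination described above: delamination growth per cycle block follows a mixed-mode Paris law, the increments are made stochastic with gamma-distributed multipliers (so the process is non-decreasing, as in a gamma deterioration process), and the probability of failure is the fraction of sample paths exceeding a critical length. All material constants and loads are illustrative assumptions, not the paper's calibrated values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed mixed-mode Paris-law constants and loading (illustrative only)
C, m = 6.4e-15, 3.0          # da/dN = C * (dG_eq)^m  [m/cycle, with dG_eq in J/m2]
dG_eq = 250.0                # equivalent mixed-mode energy release rate range, held constant
a0, a_crit = 1e-3, 30e-3     # initial and critical delamination length (m)
block = 1_000                # cycles per simulation block
n_blocks, n_paths = 300, 5_000

shape = 4.0                  # gamma shape; multipliers have mean 1, so increments stay positive
a = np.full(n_paths, a0)
pof = np.empty(n_blocks)
for i in range(n_blocks):
    mean_inc = C * dG_eq**m * block                           # deterministic Paris increment per block
    a += mean_inc * rng.gamma(shape, 1.0 / shape, n_paths)    # gamma-process style randomisation
    pof[i] = np.mean(a >= a_crit)                             # fraction of paths that have failed

print(f"P(failure) after {n_blocks * block:,} cycles: {pof[-1]:.3f}")
```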

Keywords: reinforced fibre composite, wind turbine blades, fatigue delamination, mixed failure mode, stochastic process

Procedia PDF Downloads 410
994 Public Procurement Development Stages in Georgia

Authors: Giorgi Gaprindashvili

Abstract:

The reforms carried out in Georgia are among the best examples of the evolution of public procurement in post-Soviet countries, bringing the system close to international procurement standards. Public procurement legislation began functioning in Georgia in 1998. The reform has passed through several stages to arrive at its present form. It should also be noted that countries with economies in transition, including Georgia, implemented all reforms in public procurement based on the recommendations and support of the World Bank, the United Nations, and other international organizations. The first law on public procurement in Georgia was adopted on December 9, 1998, and aimed to regulate the procurement process of budget organizations and to create a transparent and competitive environment in which private companies could legally access state funds. The priorities were identified quite clearly in the wording of the law, but the law could not be implemented to its full extent, for both objective and subjective reasons. The high level of corruption at all levels of governance can be considered the main obstacle, and it naturally had a direct impact on the procurement process, as well as on the transparency and rational use of state funds. These circumstances were the reason that reforms in this sphere continued in order to improve the procurement process; in particular, the first wave of reforms began in 2001. The public procurement agency carried out the reform with the World Bank, with the main purpose of streamlining the procurement legislation and harmonizing it with international treaties and agreements. Also, with the support of the World Bank, various activities were carried out to raise the awareness of participants involved in the procurement system. Further major changes to the legislation were introduced in May 2005, likewise directed towards improving and streamlining the procurement process. The third wave of the reform began in 2010 and more or less guaranteed the transparency of the procurement process, which later became the basis for the rational spending of state funds. The reform of the procurement system completely changed the procedures. The reform carried out in Georgia resulted in the introduction of a new electronic tendering system, which improved the transparency of the process and subsequently became the basis for the further development of a competitive environment, itself a prerequisite for rational state spending. The increased number of supplier organizations participating in the procurement process reduced actual costs relative to estimated costs by 20% to 40%, a considerable saving for procuring organizations that allows them to use the freed-up funds for their other needs. From an assessment of Georgia's public procurement reforms, it can be concluded that proper regulation of the sector and a relevant policy can lead to rational and transparent spending of the budget by the country's state institutions. The business sector also gains the opportunity to work in competitive market conditions and to carry out preliminary analysis, which is a prerequisite for future strategy and development.

Keywords: public administration, public procurement, reforms, transparency

Procedia PDF Downloads 362
993 Geographic Origin Determination of Greek Rice (Oryza Sativa L.) Using Stable Isotopic Ratio Analysis

Authors: Anna-Akrivi Thomatou, Anastasios Zotos, Eleni C. Mazarakioti, Efthimios Kokkotos, Achilleas Kontogeorgos, Athanasios Ladavos, Angelos Patakas

Abstract:

It is well known that the accurate determination of geographic origin to confront mislabeling and adulteration of foods is considered a critical issue worldwide, not only for consumers but also for producers and industries. Among agricultural products, rice (Oryza sativa L.) is the world’s third largest crop, providing food for more than half of the world’s population. Consequently, the quality and safety of rice products play an important role in people’s life and health. Despite the fact that rice is predominantly produced in Asian countries, rice cultivation in Greece is of significant importance, contributing to the income of the national agricultural sector. More than 25,000 acres are cultivated in Greece, while rice exports to other countries constitute 0.5% of the global rice trade. Although several techniques are available to provide information about the geographical origin of rice, little data exist regarding the ability of these methodologies to discriminate rice produced in Greece. Thus, the aim of this study is the comparative evaluation of the stable isotope ratio methodology regarding its ability to discriminate the geographical origin of rice samples produced in Greece from those of three Asian countries, namely Korea, China, and the Philippines. In total, eighty (80) samples were collected from selected fields of Central Macedonia (Greece) during October 2021. The light element (C, N, S) isotope ratios were measured using Isotope Ratio Mass Spectrometry (IRMS), and the results obtained were analyzed using chemometric techniques, including principal component analysis (PCA). Results indicated that the δ15N and δ34S values of rice produced in Greece were more markedly influenced by geographical origin than the δ13C values. In particular, δ34S values in rice originating from Greece were -1.98 ± 1.71, compared to 2.10 ± 1.87, 4.41 ± 0.88, and 9.02 ± 0.75 for Korea, China, and the Philippines, respectively. Among the stable isotope ratios studied, δ34S appears to be the most appropriate isotope marker for discriminating rice geographic origin between the studied areas. These results imply a significant capability of the stable isotope ratio methodology for effective geographical origin discrimination of rice, providing valuable insight for the control of improper or fraudulent labeling. Acknowledgement: This research has been financed by the Public Investment Programme/General Secretariat for Research and Innovation, under the call "YPOERGO 3, code 2018SE01300000", project title: "Elaboration and implementation of methodology for authenticity and geographical origin assessment of agricultural products".
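As a minimal sketch of the chemometric step, the following Python example standardizes hypothetical δ13C, δ15N, and δ34S values and projects them onto principal components with scikit-learn; the isotope values are invented for illustration and only loosely echo the group means reported above.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical delta-13C, delta-15N, delta-34S values for rice of known origin;
# these numbers are illustrative only, not the study's measured data.
X = np.array([
    [-27.1, 3.2, -1.9],   # Greece
    [-27.4, 2.8, -2.3],   # Greece
    [-26.9, 3.5, -1.5],   # Greece
    [-27.8, 4.1,  2.2],   # Korea
    [-27.5, 4.4,  1.9],   # Korea
    [-26.5, 5.0,  4.5],   # China
    [-26.8, 4.7,  4.3],   # China
    [-27.0, 5.3,  9.1],   # Philippines
    [-27.2, 5.1,  8.9],   # Philippines
])
origin = ["GR", "GR", "GR", "KR", "KR", "CN", "CN", "PH", "PH"]

# Standardize each isotope ratio, then project onto two principal components
scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(X))
for label, (pc1, pc2) in zip(origin, scores):
    print(f"{label}: PC1 = {pc1:+.2f}, PC2 = {pc2:+.2f}")
```

In such a projection, samples from different countries would be expected to cluster along the component dominated by δ34S, consistent with the discrimination reported above.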

Keywords: geographical origin, authenticity, rice, isotope ratio mass spectrometry

Procedia PDF Downloads 83
992 Immunomodulatory Role of Heat Killed Mycobacterium indicus pranii against Cervical Cancer

Authors: Priyanka Bhowmik, Subrata Majumdar, Debprasad Chattopadhyay

Abstract:

Background: Cervical cancer is the third major cause of cancer in women and the second most frequent cause of cancer-related deaths, causing 300,000 deaths annually worldwide. Evasion of the immune response by Human Papilloma Virus (HPV), the key contributing factor behind cancer and pre-cancerous lesions of the uterine cervix, makes immunotherapy a necessity for treating this disease. Objective: A heat-killed fraction of Mycobacterium indicus pranii (MIP), a non-pathogenic mycobacterium, has been shown to exhibit cytotoxic effects on different cancer cells, including the human cervical carcinoma cell line HeLa; however, the underlying mechanisms remain unknown. The aim of this study is to decipher the mechanism of MIP-induced HeLa cell death. Methods: The cytotoxicity of Mycobacterium indicus pranii against HeLa cells was evaluated by the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) assay. Apoptosis was detected by annexin V and propidium iodide (PI) staining. Reactive oxygen species (ROS) generation and cell cycle distribution were measured by flow cytometry. The expression of apoptosis-associated genes was analyzed by real-time PCR. Results: MIP inhibited the proliferation of HeLa cells in a time- and dose-dependent manner but caused only minor damage to normal cells. The induction of apoptosis was confirmed by the cell surface presentation of phosphatidylserine, DNA fragmentation, and mitochondrial damage. MIP caused very early (as early as 30 minutes) transcriptional activation of p53, followed by a higher activation (32-fold) at 24 hours, suggesting the prime importance of p53 in MIP-induced apoptosis in HeLa cells. The upregulation of the p53-dependent pro-apoptotic genes Bax, Bak, PUMA, and Noxa followed a lag phase that was required for the transcriptional p53 program. MIP also caused transcriptional upregulation of Toll-like receptors 2 and 4 after 30 minutes of MIP treatment, suggesting recognition of MIP by Toll-like receptors. Moreover, MIP inhibited the expression of the HPV anti-apoptotic gene E6, which is known to interfere with the p53/PUMA/Bax apoptotic cascade; this inhibition might have played a role in the transcriptional upregulation of PUMA and the subsequent apoptosis. ROS were generated transiently, concomitant with the highest transcriptional activation of p53, suggesting a plausible feedback loop between p53 and ROS in the apoptosis of HeLa cells. A ROS scavenger, N-acetyl-L-cysteine, decreased apoptosis, indicating that ROS is an important effector of MIP-induced apoptosis. Conclusion: Taken together, MIP has full potential to be a novel therapeutic agent in the clinical treatment of cervical cancer.
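The abstract reports real-time PCR fold changes (e.g., the 32-fold activation of p53 at 24 hours) without stating the quantification method; a common way to derive such values is the 2^(-ΔΔCt) method, sketched below with entirely hypothetical Ct values and with GAPDH assumed as the reference gene.

```python
def fold_change(ct_target_treated, ct_ref_treated, ct_target_control, ct_ref_control):
    """Relative expression by the 2^(-ddCt) method.
    Ct values are PCR cycle thresholds; 'ref' is a housekeeping (reference) gene."""
    d_ct_treated = ct_target_treated - ct_ref_treated
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control
    return 2 ** (-dd_ct)

# Hypothetical Ct values for p53 (target) and GAPDH (assumed reference)
# in MIP-treated vs untreated HeLa cells at 24 h:
print(f"p53 fold change ~ {fold_change(21.0, 17.0, 26.0, 17.0):.0f}x")  # -> 32x
```

A five-cycle shift in the normalized Ct corresponds to a 2^5 = 32-fold change, which is how a value of the magnitude quoted above would typically arise.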

Keywords: cancer, mycobacterium, immunity, immunotherapy

Procedia PDF Downloads 245
991 Production of Recombinant Human Serum Albumin in Escherichia coli: A Crucial Biomolecule for Biotechnological and Healthcare Applications

Authors: Ashima Sharma, Tapan K. Chaudhuri

Abstract:

Human Serum Albumin (HSA) is one of the most in-demand therapeutic proteins, with immense biotechnological applications. The current source of HSA is human blood plasma. Blood is a limited and unsafe source, as it carries the risk of contamination by various blood-derived pathogens. This issue led to the exploitation of various hosts with the aim of obtaining an alternative source for the production of recombinant HSA (rHSA). However, to date no host has proven commercially effective for rHSA production because of its respective limitations. Thus, there is an indispensable need to promote non-animal-derived rHSA production. Of all the host systems, Escherichia coli is one of the most convenient and has contributed to the production of more than 30% of the FDA-approved recombinant pharmaceuticals. E. coli grows rapidly, and its culture reaches high cell density using inexpensive and simple substrates. The fermentation batch turnaround number for E. coli culture is 300 per year, which is far greater than for any other available host system. Therefore, E. coli-derived recombinant products have greater economic potential, as the fermentation processes are cheaper than those of other expression hosts. Despite all the advantages mentioned, E. coli has not been successfully adopted as a host for rHSA production. The major bottleneck in exploiting E. coli as a host for rHSA production was aggregation, i.e., the majority of the expressed recombinant protein formed inclusion bodies (more than 90% of the total expressed rHSA) in the E. coli cytosol. Recovery of functional rHSA from inclusion bodies is not preferred because it is tedious, time consuming, laborious, and expensive. Because of this limitation, the E. coli host system was neglected for rHSA production for the last few decades. Considering the advantages of E. coli as a host, the present work targeted E. coli as an alternative host for rHSA production by resolving the major issue of inclusion body formation associated with it. In the present study, we have developed a novel and innovative method for enhanced soluble and functional production of rHSA in E. coli (~60% of the total expressed rHSA in the soluble fraction) through modulation of cellular growth, folding, and environmental parameters, leading to significantly improved expression levels as well as a higher functional and soluble proportion of the total expressed rHSA in the cytosolic fraction of the host. Therefore, the present work fills a gap in the literature by exploiting the well-studied, low-cost, fast-growing, scalable, and yet neglected host system Escherichia coli for the enhanced functional production of HSA, one of the most crucial biomolecules for clinical and biotechnological applications.

Keywords: enhanced functional production of rHSA in E. coli, recombinant human serum albumin, recombinant protein expression, recombinant protein processing

Procedia PDF Downloads 343
990 Transgenerational Impact of Intrauterine Hyperglycaemia to F2 Offspring without Pre-Diabetic Exposure on F1 Male Offspring

Authors: Jun Ren, Zhen-Hua Ming, He-Feng Huang, Jian-Zhong Sheng

Abstract:

An adverse intrauterine stimulus during critical or sensitive periods in early life may lead to health risks not only later in the life span but also in further generations. Intrauterine hyperglycaemia, a major feature of gestational diabetes mellitus (GDM), is a typical adverse environment for the development of both the F1 fetus and the F1 gamete cells. However, there is scarce information on the phenotypic differences of metabolic memory between somatic cells and germ cells exposed to intrauterine hyperglycaemia, and the direct transmission effect of intrauterine hyperglycaemia per se has not been assessed either. In this study, we built a GDM mouse model and selected male GDM offspring without a pre-diabetic phenotype as founders, to exclude postnatal diabetic influence on the gametes and thereby investigate the direct transmission effect of intrauterine hyperglycaemia exposure on the F2 offspring; we further compared the metabolic differences between the affected F1-GDM male offspring and the F2 offspring. A GDM mouse model of intrauterine hyperglycaemia was established by intraperitoneal injection of streptozotocin after pregnancy. Pups of GDM mothers were fostered by normal control mothers, and all mice were fed standard food. Male GDM offspring without a metabolic dysfunction phenotype were crossed with normal female mice to obtain the F2 offspring. Body weight, glucose tolerance test, insulin tolerance test (ITT), and the homeostasis model assessment of insulin resistance (HOMA-IR) index were measured in both generations at 8 weeks of age. Some of the F1-GDM male mice showed impaired glucose tolerance (p < 0.001), while none showed impaired insulin sensitivity, and their body weight did not differ significantly from that of control mice. Some of the F2-GDM offspring exhibited impaired glucose tolerance (p < 0.001), and all F2-GDM offspring exhibited a higher HOMA-IR index (p < 0.01 for individuals with normal glucose tolerance vs. control, p < 0.05 for glucose-intolerant individuals vs. control). All F2-GDM offspring exhibited a higher ITT curve than controls (p < 0.001 for individuals with normal glucose tolerance, p < 0.05 for glucose-intolerant individuals, vs. control) and a higher body weight than control mice (p < 0.001 for both normal glucose tolerance and glucose-intolerant individuals vs. control). While glucose intolerance is the only phenotype that F1-GDM male mice may exhibit, the F2 male generation of healthy F1-GDM fathers showed insulin resistance, increased body weight, and/or impaired glucose tolerance. These findings imply that intrauterine hyperglycaemia exposure affects germ cells and somatic cells differently, so that the F1 and F2 offspring demonstrate distinct metabolic dysfunction phenotypes, and that intrauterine hyperglycaemia exposure per se has a strong influence on the F2 generation, independent of postnatal metabolic dysfunction exposure.
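HOMA-IR is cited as one of the measured indices; the sketch below applies the standard formula, fasting glucose (mmol/L) × fasting insulin (µU/mL) / 22.5, to hypothetical values. Whether the authors used this exact human-derived constant for mice is an assumption here.

```python
def homa_ir(glucose_mmol_per_l, insulin_uU_per_ml):
    """Homeostasis model assessment of insulin resistance (standard formula;
    applying the human constant 22.5 to mouse data is an assumption here)."""
    return glucose_mmol_per_l * insulin_uU_per_ml / 22.5

# Hypothetical fasting values for a control mouse and an F2-GDM offspring
print(f"control: HOMA-IR = {homa_ir(7.0, 8.0):.2f}")   # ~2.49
print(f"F2-GDM:  HOMA-IR = {homa_ir(9.5, 14.0):.2f}")  # ~5.91
```

A higher index reflects the combination of elevated fasting glucose and insulin, which is the direction of change reported for the F2-GDM offspring above.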

Keywords: inheritance, insulin resistance, intrauterine hyperglycaemia, offspring

Procedia PDF Downloads 235
989 Currently Use Pesticides: Fate, Availability, and Effects in Soils

Authors: Lucie Bielská, Lucia Škulcová, Martina Hvězdová, Jakub Hofman, Zdeněk Šimek

Abstract:

The currently used pesticides represent a broad group of chemicals with various physicochemical and environmental properties, whose input has reached 2×10⁶ tons/year and is expected to increase further. Of that amount, only 1% directly interacts with the target organism, while the rest represents a potential risk to the environment and human health. Despite being authorized and approved for field applications, the effects of pesticides in the environment can differ from the model scenarios due to various pesticide-soil interactions and the resulting modified fate and behavior. As such, direct monitoring of pesticide residues and evaluation of their impact on soil biota, the aquatic environment, food contamination, and human health should be performed to prevent environmental and economic damage. The present project focuses on fluvisols, as they are intensively used in agriculture but face several environmental stressors. Fluvisols develop in the vicinity of rivers through the periodic settling of alluvial sediments and periodic interruptions to pedogenesis by flooding. As a result, fluvisols exhibit very high yields per unit area and are intensively used and loaded with pesticides. Because of flooding, their regular contact with surface water raises serious concerns about surface water contamination. In order to monitor pesticide residues and assess their environmental and biological impact within this project, 70 fluvisols were sampled across the Czech Republic and analyzed for the total and bioaccessible amounts of 40 different pesticides. For that purpose, methodologies for pesticide extraction and analysis by liquid chromatography-mass spectrometry were developed and optimized. To assess the biological risks, earthworm bioaccumulation tests and various types of passive sampling techniques (XAD resin, Chemcatcher, and silicone rubber) were optimized and applied. These data on chemical analysis and bioavailability were combined with the results of soil analysis, including the measurement of basic physicochemical soil properties as well as detailed characterization of soil organic matter with the advanced method of diffuse reflectance infrared spectrometry (DRIFT). The results provide unique data on residual pesticide levels in the Czech Republic and on the factors responsible for increased pesticide residue levels, which should be included in the modeling of pesticide fate and effects.

Keywords: currently used pesticides, fluvisols, bioavailability, QuEChERS, liquid chromatography-mass spectrometry, soil properties, DRIFT analysis, pesticides

Procedia PDF Downloads 461
988 A Delphi Study to Build Consensus for Tuberculosis Control Guideline to Achieve the WHO End TB 2035 Strategy

Authors: Pui Hong Chung, Cyrus Leung, Jun Li, Kin On Kwok, Ek Yeoh

Abstract:

Introduction: Studies of TB control in intermediate tuberculosis burden countries (IBCs) comprise a relatively small proportion of the TB control literature, compared with the effort devoted to their high- and low-burden counterparts. There is currently a lack of consensus on the optimal tools and strategies for combating TB in IBCs, and guidelines for TB control are inadequate, posing a great obstacle to eliminating TB in these countries. To fill this research and services gap, we need to summarize the findings of the efforts made in this regard and to seek consensus in terms of policy making for TB control; we have therefore devised a series of scoping and Delphi studies for these purposes. Method: The scoping and Delphi studies are conducted in parallel to feed information to each other. Before the Delphi iterations, we invited three local experts in TB control in Hong Kong to participate in a pre-assessment round of the Delphi study to comment on the validity, relevance, and clarity of the Delphi questionnaire. Result: Two scoping studies have been conducted, regarding LTBI control in health care workers in IBCs and TB control in the elderly of IBCs, respectively. The results of these two studies were used as the foundation for developing the Delphi questionnaire, which covers seven areas: characteristics of IBCs; adequacy of research and services in LTBI control in IBCs; importance and feasibility of interventions for TB control and prevention in hospital; screening and treatment of LTBI in the community; reasons for refusal of or default from LTBI treatment; medical adherence to LTBI treatment; and importance and feasibility of interventions for TB control and prevention in the elderly in IBCs. The local experts also commented on the two scoping studies, thus acting as the sixth, expert-consultation phase of the Arksey and O'Malley framework for scoping studies, either to refine the scope and strategies used in these studies or to supply ideas for further scoping or systematic review studies. In the subsequent stage, an international expert panel, comprising 15 to 20 experts from IBCs in the Western Pacific Region, will be recruited to join two rounds of anonymous Delphi iterations. Four categories of TB control experts, namely clinicians, policy makers, microbiologists/laboratory personnel, and public health clinicians, will be the target groups. A consensus level of 80% is used to determine whether consensus has been achieved on particular issues. Key messages: 1. Scoping reviews and the Delphi method are useful for identifying gaps and then achieving consensus in research. 2. Many resources are currently directed to high-burden countries; however, the usually neglected intermediate-burden countries are an indispensable part of achieving the ambitious WHO End TB 2035 target.
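To illustrate the stated 80% consensus rule, the following minimal sketch computes the share of panellists selecting the modal answer for a single Delphi item and flags whether consensus is reached; the panel responses are hypothetical.

```python
from collections import Counter

def consensus_reached(responses, threshold=0.80):
    """Return (consensus?, modal option, agreement) for one Delphi item,
    where agreement is the share of panellists choosing the modal answer."""
    counts = Counter(responses)
    option, n = counts.most_common(1)[0]
    agreement = n / len(responses)
    return agreement >= threshold, option, agreement

# Hypothetical ratings from a 15-member panel on one proposed intervention
panel = ["agree"] * 13 + ["disagree"] * 2
ok, option, share = consensus_reached(panel)
print(f"modal answer '{option}' chosen by {share:.0%} -> consensus: {ok}")
```

Here 13 of 15 panellists (87%) agree, so the item would clear the 80% threshold and be treated as having reached consensus after that round.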

Keywords: Delphi questionnaire, tuberculosis, WHO, latent TB infection

Procedia PDF Downloads 296
987 Geographic Information System Based Multi-Criteria Subsea Pipeline Route Optimisation

Authors: James Brown, Stella Kortekaas, Ian Finnie, George Zhang, Christine Devine, Neil Healy

Abstract:

The use of GIS as an analysis tool for engineering decision making is now best practice in the offshore industry. GIS enables multidisciplinary data integration, analysis, and visualisation, which allows the presentation of large and intricate datasets in a simple map interface accessible to all project stakeholders. Presenting integrated geoscience and geotechnical data in GIS enables decision makers to be well informed. This paper is a successful case study of how GIS spatial analysis techniques were applied to help select the most favourable pipeline route. Routing a pipeline through any natural environment faces numerous obstacles, whether topographical, geological, engineering, or financial. Where the pipeline is subjected to external hydrostatic water pressure and is carrying pressurised hydrocarbons, the requirement to route the pipeline safely through hazardous terrain becomes absolutely paramount. This study illustrates how the application of modern, GIS-based pipeline routing techniques enabled the identification of a single most favourable pipeline route crossing a challenging seabed terrain. Conventional approaches to pipeline route determination focus on manual avoidance of primary constraints whilst endeavouring to minimise route length. Such an approach is qualitative, subjective, and liable to bias towards the discipline and expertise involved in the routing process. For very short routes traversing benign seabed topography in shallow water this approach may be sufficient, but for deepwater geohazardous sites, an automated, multi-criteria, and quantitative approach is essential. This study combined multiple routing constraints using modern least-cost-routing algorithms deployed in GIS, hitherto unachievable with conventional approaches. The least-cost-routing procedure begins with the assignment of geocost across the study area. Geocost is defined as a numerical penalty score representing the hazard posed to the pipeline by each routing constraint (e.g. slope angle, rugosity, vulnerability to debris flows). All geocosted routing constraints are combined to generate a composite geocost map that is used to compute the least-geocost route between two defined terminals. The analyses were applied to select the most favourable pipeline route for a potential gas development in deep water. The study area is geologically complex, with a series of incised, potentially active canyons carved into a steep escarpment and evidence of extensive debris flows. A similar debris flow in the future could cause significant damage to a poorly placed pipeline. Protruding inter-canyon spurs offer lower-gradient options for ascending the escarpment, but their vulnerability to periodic failure is not well understood. Close collaboration between geoscientists, pipeline engineers, geotechnical engineers, and of course the gas export pipeline operator guided the analyses and the assignment of geocosts. Shorter route length, less severe slope angles, and geohazard avoidance were the primary drivers in identifying the most favourable route.
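As a simplified sketch of the least-cost-routing idea, the code below builds a composite geocost raster from hypothetical constraint layers and runs a grid-based Dijkstra search between two terminals. The layer names, weights, and grid are assumptions for illustration only; a production workflow would use the GIS platform's own cost-distance tools rather than this hand-rolled search.

```python
import heapq
import numpy as np

rng = np.random.default_rng(1)

# --- Hypothetical geocost layers on a small raster grid ---------------------
slope = rng.random((40, 60))            # normalised slope-angle penalty
rugosity = rng.random((40, 60))         # seabed roughness penalty
debris_flow = np.zeros((40, 60))
debris_flow[10:25, 20:35] = 5.0         # a "canyon" area vulnerable to debris flows

# Composite geocost: weighted sum of the individual constraint layers
geocost = 1.0 + 2.0 * slope + 1.0 * rugosity + debris_flow

def least_cost_path(cost, start, end):
    """Dijkstra over an 8-connected grid; step cost = cell geocost * step length."""
    rows, cols = cost.shape
    steps = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
    dist = np.full(cost.shape, np.inf)
    prev = {}
    dist[start] = 0.0
    heap = [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == end:
            break
        if d > dist[node]:
            continue
        r, c = node
        for dr, dc in steps:
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr, nc] * np.hypot(dr, dc)
                if nd < dist[nr, nc]:
                    dist[nr, nc] = nd
                    prev[(nr, nc)] = node
                    heapq.heappush(heap, (nd, (nr, nc)))
    # Reconstruct the route from the end terminal back to the start terminal
    path, node = [end], end
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[end]

route, total = least_cost_path(geocost, start=(2, 2), end=(37, 57))
print(f"route length: {len(route)} cells, total geocost: {total:.1f}")
```

The resulting route bends around the high-geocost "canyon" block rather than crossing it, which is the behaviour the composite geocost map is designed to produce.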

Keywords: geocost, geohazard, pipeline route determination, pipeline route optimisation, spatial analysis

Procedia PDF Downloads 402
986 Acoustic Radiation Force Impulse Elastography of the Hepatic Tissue of Canine Brachycephalic Patients

Authors: A. C. Facin, M. C. Maronezi, M. P. Menezes, G. L. Montanhim, L. Pavan, M. A. R. Feliciano, R. P. Nociti, R. A. R. Uscategui, P. C. Moraes

Abstract:

The incidence of brachycephalic syndrome (BS) in the clinical routine of small animals has increased significantly, given the higher proportion of brachycephalic pets in recent years, and it has come to be considered an animal welfare problem. The treatment of BS is surgical, and the related clinical signs can be considerably attenuated. Nevertheless, the systemic effects of BS are still poorly reported, and little is known about them when surgical correction is not performed early. Affected dogs are more likely to develop cardiopulmonary, gastrointestinal, and sleep disorders, in which chronic hypoxemia plays a major role. The syndrome is compared with obstructive sleep apnea (OSA) in humans, both being considered causes of systemic and metabolic dysfunction. Among the several consequences of BS, little is known about whether the syndrome also affects the hepatic tissue of brachycephalic patients. Elastography is a promising ultrasound technique that evaluates tissue elasticity and has recently been used for the diagnosis of liver fibrosis. In human medicine, there is growing concern regarding hepatic injury in patients affected by OSA. This prospective study investigates whether BS has any consequence for the hepatic parenchyma of brachycephalic dogs that do not receive surgical treatment. The study was conducted following the approval of the Animal Ethics and Welfare Committee of the Faculdade de Ciências Agrárias e Veterinárias, UNESP, Campus Jaboticabal, Brazil (protocol no 17944/2017) and was funded by the Sao Paulo Research Foundation (FAPESP, process no 2017/24809-4). The methodology was based on ARFI elastography using the ACUSON S2000/SIEMENS device, with a convex multifrequency transducer and specific software, as well as a clinical evaluation of the syndrome, in order to determine whether these can be used as a prognostic, non-invasive tool. For quantitative elastography, three measurements of shear wave velocity (in meters per second) and depth (in centimeters) were collected in the left lateral, left medial, right lateral, right medial, and caudate lobes of the liver. The brachycephalic patients, 16 pugs and 30 French bulldogs, were classified using a previously established and validated 4-point functional grading system based on clinical evaluation before and after a 3-minute exercise tolerance test. The control group was based on the same features collected in 22 beagles. The software R version 3.3.0 was used for the analysis, and the significance level was set at 0.05. The data were analysed for normality of residuals and homogeneity of variances by the Shapiro-Wilk test. Comparisons of parametric continuous variables between breeds were performed using ANOVA with a post hoc test for pairwise comparisons. The preliminary results show statistically significant differences between the brachycephalic groups and the control group in all lobes analysed (p ≤ 0.05), with higher shear wave velocities in the hepatic tissue of brachycephalic dogs. In this context, the results obtained in this study contribute to the understanding of BS and its consequences in veterinary patients, providing evidence that yet another systemic consequence of the syndrome may occur in brachycephalic patients, one not previously reported in the veterinary literature.
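The statistical analysis was performed in R; the following Python sketch mirrors an equivalent workflow (Shapiro-Wilk checks, one-way ANOVA, and a Bonferroni-corrected pairwise post hoc) on hypothetical shear wave velocity data, simplified to one set of measurements per group.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Hypothetical shear wave velocities (m/s) for one liver lobe per group
groups = {
    "beagle (control)": rng.normal(1.20, 0.15, 22),
    "pug":              rng.normal(1.45, 0.18, 16),
    "french bulldog":   rng.normal(1.50, 0.20, 30),
}

# Normality check per group (the study tested residuals; this is a simplification)
for name, v in groups.items():
    w, p = stats.shapiro(v)
    print(f"Shapiro-Wilk {name}: W = {w:.3f}, p = {p:.3f}")

# One-way ANOVA across the three groups
f, p = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f:.2f}, p = {p:.4f}")

# Bonferroni-corrected pairwise t-tests as a simple post hoc illustration
names = list(groups)
n_pairs = 3
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        t, p = stats.ttest_ind(groups[names[i]], groups[names[j]])
        print(f"{names[i]} vs {names[j]}: p_adj = {min(p * n_pairs, 1.0):.4f}")
```

With group means set higher for the brachycephalic breeds, the ANOVA and post hoc comparisons flag the control-versus-brachycephalic contrasts, analogous to the pattern of results described above.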

Keywords: airway obstruction, brachycephalic airway obstructive syndrome, hepatic injury, obstructive sleep apnea

Procedia PDF Downloads 115
985 The Negative Effects of Controlled Motivation on Mathematics Achievement

Authors: John E. Boberg, Steven J. Bourgeois

Abstract:

The decline in student engagement and motivation through the middle years is well documented and clearly associated with a decline in mathematics achievement that persists through high school. To combat this trend and, very often, to meet high-stakes accountability standards, a growing number of parents, teachers, and schools have implemented various methods to incentivize learning. However, according to Self-Determination Theory, forms of incentivized learning such as public praise, tangible rewards, or threats of punishment tend to undermine intrinsic motivation and learning. By focusing on external forms of motivation that thwart autonomy in children, adults also potentially threaten relatedness measures such as trust and emotional engagement. Furthermore, these controlling motivational techniques tend to promote shallow forms of cognitive engagement at the expense of more effective deep processing strategies. Therefore, any short-term gains in apparent engagement or test scores are overshadowed by long-term diminished motivation, resulting in inauthentic approaches to learning and lower achievement. The current study focuses on the relationships between student trust, engagement, and motivation during these crucial years as students transition from elementary to middle school. In order to test the effects of controlled motivational techniques on achievement in mathematics, this quantitative study was conducted on a convenience sample of 22 elementary and middle schools from a single public charter school district in the south-central United States. The study employed multi-source data from students (N = 1,054), parents (N = 7,166), and teachers (N = 356), along with student achievement data and contextual campus variables. Cross-sectional questionnaires were used to measure the students’ self-regulated learning, emotional and cognitive engagement, and trust in teachers. Parents responded to a single item on incentivizing the academic performance of their child, and teachers responded to a series of questions about their acceptance of various incentive strategies. Structural equation modeling (SEM) was used to evaluate model fit and analyze the direct and indirect effects of the predictor variables on achievement. Although a student’s trust in teacher positively predicted both emotional and cognitive engagement, none of these three predictors accounted for any variance in achievement in mathematics. The parents’ use of incentives, on the other hand, predicted a student’s perception of his or her controlled motivation, and these two variables had significant negative effects on achievement. While controlled motivation had the greatest effects on achievement, parental incentives demonstrated both direct and indirect effects on achievement through the students’ self-reported controlled motivation. Comparing upper elementary student data with middle-school student data revealed that controlling forms of motivation may be taking their toll on student trust and engagement over time. While parental incentives positively predicted both cognitive and emotional engagement in the younger sub-group, such forms of controlling motivation negatively predicted both trust in teachers and emotional engagement in the middle-school sub-group. These findings support the claims, posited by Self-Determination Theory, about the dangers of incentivizing learning. Short-term gains belie the underlying damage to motivational processes that lead to decreased intrinsic motivation and achievement. Such practices also appear to thwart basic human needs such as relatedness.
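The study estimated direct and indirect effects with SEM; as a simplified stand-in, the sketch below computes the classic product-of-coefficients indirect effect (parental incentives -> controlled motivation -> achievement) from two OLS regressions on simulated, hypothetical data using statsmodels. This illustrates the logic of the decomposition only, not the authors' full latent-variable model.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 1000

# Hypothetical standardized data mimicking the hypothesized paths:
# parental incentives -> controlled motivation -> (negative) math achievement
incentives = rng.normal(size=n)
controlled = 0.40 * incentives + rng.normal(scale=0.9, size=n)
achievement = -0.35 * controlled - 0.05 * incentives + rng.normal(scale=0.9, size=n)

# Path a: incentives -> controlled motivation
a = sm.OLS(controlled, sm.add_constant(incentives)).fit().params[1]

# Paths b (controlled motivation) and c' (direct effect), each controlling for the other
X = sm.add_constant(np.column_stack([controlled, incentives]))
fit = sm.OLS(achievement, X).fit()
b, c_direct = fit.params[1], fit.params[2]

print(f"a  (incentives -> controlled motivation):  {a:+.3f}")
print(f"b  (controlled motivation -> achievement): {b:+.3f}")
print(f"c' (direct effect of incentives):          {c_direct:+.3f}")
print(f"indirect effect a*b:                       {a * b:+.3f}")
```

A negative a*b product alongside a small direct effect is the pattern described above, where parental incentives depress achievement largely through students' controlled motivation.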

Keywords: controlled motivation, student engagement, incentivized learning, mathematics achievement, self-determination theory, student trust

Procedia PDF Downloads 215
984 Exploring Digital Media’s Impact on Sports Sponsorship: A Global Perspective

Authors: Sylvia Chan-Olmsted, Lisa-Charlotte Wolter

Abstract:

With the continuous proliferation of media platforms, there have been tremendous changes in media consumption behaviors. From the perspective of sports sponsorship, while there is now a multitude of platforms on which to create brand associations, the changing media landscape and the shift of message control also mean that sports sponsors have to take into account the nature of, and consumer responses toward, these emerging digital media to devise effective marketing strategies. Utilizing a personal interview methodology, this study is qualitative and exploratory in nature. A total of 18 experts from European and American academia, the sports marketing industry, and sports leagues/teams were interviewed to address three main research questions: 1) What are the major changes in digital technologies that are relevant to sports sponsorship; 2) How have digital media influenced the channels and platforms of sports sponsorship; and 3) How have these technologies affected the goals, strategies, and measurement of sports sponsorship. The study found that sports sponsorship has moved from consumer engagement, engagement measurement, and the consequences of engagement on brand behaviors to one-on-one micro-targeting, engagement by context, time, and space, and activation and leveraging based on tracking and databases. From the perspective of platforms and channels, the use of mobile devices is prominent during sports content consumption. Increasing multiscreen media consumption means that sports sponsors need to optimize their investment decisions in leagues, teams, or game-related content sources, as they need to go where the fans are most engaged. The study observed an imbalanced strategic leveraging of technology and digital infrastructure. While sports leagues have placed less emphasis on brand value management via technology, sports sponsors have been much more active in utilizing technologies such as mobile/LBS tools, big data/user information, real-time and programmatic marketing, and social media activation. Regardless of the new media and platforms, the study found that integration and contextualization are the two essential means of improving sports sponsorship effectiveness through technology, that is, how sponsors effectively integrate social media, mobile, and second screens into their existing legacy media sponsorship plans so that technology works for the experience and message instead of distracting fans. Additionally, technological advancement and the attention economy amplify the importance of consumer data gathering, but sports consumer data do not equate to loyalty or engagement. This study also affirms the benefit of digital media in offering viral and pre-event activations through storytelling well before the actual event, which is critical for leveraging brand associations before and after. That is, sponsors now have multiple opportunities and platforms to tell stories about their brands over a longer time period. In summary, digital media facilitate fan experience, access to the brand message, multiplatform/channel presentations, storytelling, and content sharing. Nevertheless, rather than focusing on technology and media, today's sponsors need to define what they want to focus on in terms of content themes that connect with their brands and then identify the channels and platforms. The big challenge for sponsors is to play to each venue's or medium's specificity and its fit with the target audience, and not to deliver the same message uniformly in the same format on different platforms and channels.

Keywords: digital media, mobile media, social media, technology, sports sponsorship

Procedia PDF Downloads 293
983 Biorefinery as Extension to Sugar Mills: Sustainability and Social Upliftment in the Green Economy

Authors: Asfaw Gezae Daful, Mohsen Alimandagari, Kathleen Haigh, Somayeh Farzad, Eugene Van Rensburg, Johann F. Görgens

Abstract:

The sugar industry has to 're-invent' itself to ensure long-term economic survival, opportunities for job creation, and enhanced community-level impacts, given increasing pressure from fluctuating and low global sugar prices, increasing energy prices, and sustainability demands. We propose biorefineries, annexed to existing sugar mills and using low-value lignocellulosic biomass (sugarcane bagasse, leaves, and tops), for the re-vitalisation of the sugar industry, producing a spectrum of high-value platform chemicals along with biofuel, bioenergy, and electricity. This presents an opportunity for greener products, mitigating climate change and overcoming economic challenges. Xylose from labile hemicellulose remains largely underutilized, and its conversion to value-added products is a major challenge. Insight is required into pretreatment and/or extraction to optimize the production of cellulosic ethanol together with lactic acid, furfural, or biopolymers from sugarcane bagasse, leaves, and tops. Experimental conditions for alkaline and pressurized hot water extraction, and for dilute acid and steam explosion pretreatment, of sugarcane bagasse and harvest residues were investigated to serve as a basis for developing various process scenarios under a sugarcane biorefinery scheme. Dilute acid and steam explosion pretreatment were optimized for maximum hemicellulose recovery, combined sugar yield, and solids digestibility. An optimal range of conditions was established for alkaline and liquid hot water extraction of hemicellulosic biopolymers, as well as conditions for acceptable enzymatic digestibility of the solid residue after such extraction. Using data from the above, a series of energy-efficient biorefinery scenarios is under development and is modeled using Aspen Plus® software to simulate potential factories, better understand the biorefinery processes, and estimate the CAPEX and OPEX, environmental impacts, and overall viability. A rigorous and detailed sustainability assessment methodology was formulated to address all pillars of sustainability. This work is ongoing and, to date, models have been developed for some of the processes, which can ultimately be combined into biorefinery scenarios. This will allow systematic comparison of a series of biorefinery scenarios to assess the potential to reduce negative impacts and maximize benefits across social, economic, and environmental factors on a lifecycle basis.
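The abstract mentions estimating CAPEX, OPEX, and overall viability from the Aspen Plus® models; one standard viability metric for comparing such scenarios is net present value, sketched below with entirely hypothetical cost and revenue figures that are not drawn from this study.

```python
def npv(capex, annual_revenue, annual_opex, discount_rate, years):
    """Net present value of a scenario: upfront CAPEX plus discounted net cash flows."""
    cash_flow = annual_revenue - annual_opex
    return -capex + sum(cash_flow / (1 + discount_rate) ** t for t in range(1, years + 1))

# Hypothetical biorefinery scenarios annexed to a sugar mill (figures are illustrative)
scenarios = {
    "ethanol only":          dict(capex=120e6, annual_revenue=35e6, annual_opex=18e6),
    "ethanol + lactic acid": dict(capex=180e6, annual_revenue=60e6, annual_opex=30e6),
    "ethanol + furfural":    dict(capex=160e6, annual_revenue=52e6, annual_opex=27e6),
}
for name, s in scenarios.items():
    value = npv(discount_rate=0.10, years=20, **s)
    print(f"{name}: NPV over 20 years ~ {value / 1e6:+.0f} M$")
```

In a full techno-economic assessment these cash flows would come from the Aspen Plus® mass and energy balances and costed equipment lists, and the economic ranking would sit alongside the environmental and social indicators on the same lifecycle basis.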

Keywords: biomass, biorefinery, green economy, sustainability

Procedia PDF Downloads 509