Search results for: performance measurement systems
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 21436

376 Spinetoram 10% WG + Sulfoxaflor 30% WG: A Promising Green Chemistry to Manage Pest Complex in Bt Cotton

Authors: Siddharudha B. Patil

Abstract:

Cotton is a premier commercial fibre crop of India, subject to the ravages of insect pests. Sucking pests, viz. thrips, Thrips tabaci (Lind.); leafhopper, Amrasca devastans (Dist.); and mirid bug, Poppiocapsidea biseratense (Dist.), together with bollworms, continue to inflict damage on Bt cotton right from the seedling stage. Their infestation impacts cotton yield to the extent of 30-40 percent. Chemical control remains one of the adoptable techniques for combating these pests. Presently, growers face many challenges in selecting effective chemicals that fit into an integrated pest management programme. Spinetoram has a broad spectrum of excellent insecticidal activity against both sucking pests and bollworms. Hence, it is expected to make a great contribution to stable production and quality improvement of agricultural products. Spinetoram is a derivative of biologically active substances (spinosyns) produced by the soil actinomycete Saccharopolyspora spinosa; it is a semi-synthetic active ingredient representing the spinosyn chemical class of insecticides and has demonstrated a high level of efficacy with reduced risk to beneficial arthropods. The present study tested the efficacy of Spinetoram against sucking pests and bollworms in comparison with other insecticides in Bt cotton under field conditions. A field experiment was laid out during 2013-14 and 2014-15 at the Agricultural Research Station, Dharwad (Karnataka, India) in a randomized block design comprising eight treatments and three replications. The Bt cotton genotype Bunny BG-II was sown in a plot size of 5.4 m × 5.4 m. Recommended agronomic practices were followed. Spinetoram 12% SC alone and in combination with sulfoxaflor at varied dosages was tested against the pest complex. Performance was compared with Spinosad 45% SC and thiamethoxam 25% WG. The results of consecutive seasons revealed that thrips and leafhopper populations differed nonsignificantly among treatments initially but varied significantly 3 days after imposition.
Among the treatments, the combi-product Spinetoram 10% WG + Sulfoxaflor 30% WG @ 140 g a.i./ha registered the lowest population of thrips (3.91/3 leaves) and leafhoppers (1.08/3 leaves), followed by its lower dosages, viz. 120 g a.i./ha (4.86 and 1.14/3 leaves of thrips and leafhoppers, respectively) and 100 g a.i./ha (6.02 and 1.23/3 leaves, respectively), which were at par and significantly superior to the rest of the treatments. On the contrary, the populations of thrips, leafhoppers and mirid bugs in the untreated control were on the higher side. Similarly, the higher dosage of Spinetoram 10% WG + Sulfoxaflor 30% WG (140 g a.i./ha) proved its bioefficacy by registering the lowest mirid bug incidence of 1.70/25 squares, followed by its lower dosages (1.78 and 1.83/25 squares, respectively). Further observations on bollworm incidence revealed that the higher dosage of Spinetoram 10% WG + Sulfoxaflor 30% WG (140 g a.i./ha) registered the lowest percentage of boll damage (7.22%), more good opened bolls (36.89/plant) and higher seed cotton yield (19.45 q/ha), followed by its lower dosages, Spinetoram 12% SC alone and Spinosad 45% SC, which were at par and significantly superior to the rest of the treatments. However, significantly higher boll damage (15.13%) and lower seed cotton yield (14.45 q/ha) were registered in the untreated control. Thus, Spinetoram 10% WG + Sulfoxaflor 30% WG can be a promising option for pest management in Bt cotton.

Keywords: Spinetoram 10% WG + Sulfoxaflor 30% WG, sucking pests, bollworms, Bt cotton, management

Procedia PDF Downloads 223
375 Sustainable Crop Production: Greenhouse Gas Management in Farm Value Chain

Authors: Aswathaman Vijayan, Manish Jha, Ullas Theertha

Abstract:

Climate change and global warming have become an issue for both developed and developing countries and are perhaps the biggest threat to the environment. We at ITC Limited believe that a company's performance must be measured by its Triple Bottom Line contribution to building economic, social and environmental capital. This Triple Bottom Line strategy focuses on embedding sustainability in business practices, investing in social development and adopting a low-carbon growth path with a cleaner environment approach. The Agri Business Division - ILTD operates in the tobacco-growing regions of the Andhra Pradesh and Karnataka states of India. The agri value chain of the company comprises two distinct phases: the first phase is agricultural operations undertaken by ITC-trained farmers, and the second phase is industrial operations, which include marketing and processing of the agricultural produce. This research work covers the greenhouse gas (GHG) management strategy of ITC in the agricultural operations undertaken by the farmers. The agriculture sector adds considerably to global GHG emissions through the use of carbon-based energies, use of fertilizers and other farming operations such as ploughing. In order to minimize the impact of farming operations on the environment, ITC has taken a big leap in implementing systems and processes to reduce the GHG impact in the farm value chain by partnering with the farming community. The company has undertaken a unique three-pronged approach to GHG management in the farm value chain: 1) GHG inventory at the farm value chain: different sources of GHG emission in the farm value chain were identified and quantified for the baseline year, as per the IPCC guidelines for greenhouse gas inventories.
The major sources of emission identified are emission due to nitrogenous fertilizer application during seedling production and in the main field; emission due to diesel usage for farm machinery; emission due to fuel consumption; and emission due to burning of crop residues. 2) Identification and implementation of technologies to reduce GHG emission: various methodologies and technologies were identified for each GHG emission source and implemented at the farm level. The identified methodologies are reducing the consumption of chemical fertilizer at the farm through site-specific nutrient recommendation; usage of a sharp shovel for land preparation to reduce diesel consumption; implementation of energy conservation technologies to reduce fuel requirement; and avoiding burning of crop residue by incorporating it in the main field. These methodologies were implemented at the farm level, and the GHG emission was quantified to understand the reduction achieved. 3) Social and farm forestry for CO₂ sequestration: in addition, the company encouraged social and farm forestry on wastelands to convert them into green cover. The plantations are carried out with fast-growing trees, viz. Eucalyptus, Casuarina and Subabul, at the rate of 10,000 ha of land per year. The above approach minimized a considerable amount of GHG emission in the farm value chain, benefiting farmers, community and environment as a whole. In addition, the CO₂ stock created by the social and farm forestry programme has made the farm value chain environment-friendly.
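The inventory step described above follows the standard IPCC pattern of multiplying activity data by an emission factor per source. A minimal sketch in Python; the source names, activity levels, and emission factors below are illustrative placeholders, not values from ITC's inventory or the IPCC tables:

```python
# Illustrative Tier-1 style inventory: emission = activity data x emission factor.
# All numbers below are hypothetical, for demonstration only.
EF = {  # kg CO2-eq per unit of activity (placeholder factors)
    "n_fertilizer_kg": 1.9,   # per kg nitrogenous fertilizer applied
    "diesel_l": 2.7,          # per litre of diesel burned in farm machinery
    "residue_burned_kg": 1.5, # per kg crop residue burned
}
activity = {  # per-hectare activity data (placeholder values)
    "n_fertilizer_kg": 120.0,
    "diesel_l": 80.0,
    "residue_burned_kg": 500.0,
}

# Sum the source-wise products to get the farm-level inventory.
total = sum(activity[k] * EF[k] for k in activity)
print(f"Farm GHG inventory: {total:.0f} kg CO2-eq/ha")
```

Reduction options such as site-specific nutrient recommendation then show up directly as lower activity values (less fertilizer, less diesel, zero residue burned) in the same calculation.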

Keywords: CO₂ sequestration, farm value chain, greenhouse gas, ITC Limited

Procedia PDF Downloads 278
374 Assessing the Environmental Efficiency of China’s Power System: A Spatial Network Data Envelopment Analysis Approach

Authors: Jianli Jiang, Bai-Chen Xie

Abstract:

The climate issue has aroused global concern. Achieving sustainable development is a good path for countries to mitigate environmental and climatic pressures, although there are many difficulties. The first step towards sustainable development is to evaluate the environmental efficiency of the energy industry with proper methods. The power sector is a major source of CO2, SO2, and NOx emissions, so evaluating the environmental efficiency (EE) of power systems is a prerequisite for alleviating pressure on energy and the environment. Data Envelopment Analysis (DEA) has been widely used in efficiency studies. However, measuring the efficiency of a system (be it a nation, region, sector, or business) is a challenging task. Classic DEA treats the decision-making units (DMUs) as independent, which neglects the interaction between DMUs. Ignoring these inter-regional links may result in a systematic bias in the efficiency analysis; for instance, the renewable power generated in a certain region may benefit adjacent regions, while SO2 and CO2 emissions act oppositely. This study proposes a spatial network DEA (SNDEA) with a slack measure that can capture the spatial spillover effects of inputs/outputs among DMUs when measuring efficiency. This approach is used to study the EE of China's power system, which consists of generation, transmission, and distribution departments, using a panel dataset from 2014 to 2020. In the empirical example, the energy and patent inputs, the undesirable CO2 output, and the renewable energy (RE) power variables are tested for a significant spatial spillover effect. Compared with the classic network DEA, the SNDEA result shows an obvious difference, as tested by the global Moran's I index. From a dynamic perspective, the EE of the power system experiences a visible surge from 2015 and then a sharp downtrend from 2019, keeping the same trend as the power transmission department.
This phenomenon benefits from the market-oriented reform of the Chinese power grid enacted in 2015. The rapid decline in the environmental efficiency of the transmission department in 2020 was mainly due to the COVID-19 epidemic, which seriously hindered economic development. The EE of the power generation department witnesses a declining trend overall; this is reasonable when RE power is taken into consideration. The installed capacity of RE power in 2020 was 4.40 times that in 2014, while power generation was 3.97 times; in other words, power generation per unit of installed capacity shrank. In addition, the consumption cost of renewable power increases rapidly with the increase of RE power generation. These two aspects make the EE of the power generation department show a declining trend. By incorporating the interactions among inputs/outputs into the DEA model, this paper proposes an efficiency evaluation method on the basis of the DEA framework, which sheds some light on efficiency evaluation in regional studies. Furthermore, the SNDEA model and the spatial DEA concept can be extended to other fields, such as industry- and country-level studies.
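For context, the classic input-oriented CCR model that SNDEA extends solves one linear programme per DMU: shrink that DMU's inputs by a factor theta while a non-negative combination of all DMUs still dominates it. A minimal sketch of that classic model only (without the spatial spillover or network structure the paper adds), assuming scipy is available:

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input(X, Y):
    """Input-oriented CCR efficiency score for each DMU.

    X: (n, m) array of inputs, Y: (n, s) array of outputs; rows are DMUs.
    For DMU o: minimise theta subject to
        sum_j lam_j * x_j <= theta * x_o   (inputs can be scaled down)
        sum_j lam_j * y_j >= y_o           (outputs are maintained)
        lam >= 0
    """
    n, m = X.shape
    s = Y.shape[1]
    scores = []
    for o in range(n):
        c = np.zeros(n + 1)
        c[0] = 1.0                      # decision vars: [theta, lam_1..lam_n]
        A_ub = np.zeros((m + s, n + 1))
        b_ub = np.zeros(m + s)
        A_ub[:m, 0] = -X[o]             # X.T @ lam - theta * x_o <= 0
        A_ub[:m, 1:] = X.T
        A_ub[m:, 1:] = -Y.T             # -Y.T @ lam <= -y_o
        b_ub[m:] = -Y[o]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
        scores.append(res.x[0])
    return np.array(scores)

# Toy example: DMU B uses twice the input of DMU A for the same output.
X = np.array([[2.0], [4.0]])
Y = np.array([[2.0], [2.0]])
scores = dea_ccr_input(X, Y)
print(scores)  # DMU A is efficient; DMU B could halve its input
```

The spatial extension in the paper additionally lets one region's inputs/outputs (e.g. renewable generation, SO2 and CO2 emissions) enter its neighbours' constraints, which this classic formulation deliberately omits.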

Keywords: spatial network DEA, environmental efficiency, sustainable development, power system

Procedia PDF Downloads 84
373 Assessment of Energy Efficiency and Life Cycle Greenhouse Gas Emission of Wheat Production on Conservation Agriculture to Achieve Soil Carbon Footprint in Bangladesh

Authors: MD Mashiur Rahman, Muhammad Arshadul Haque

Abstract:

Emerging conservation agriculture (CA) is an option for improving soil health and maintaining environmental sustainability in intensive agriculture, especially in tropical climates. A three-year field experiment was performed from 2018 to 2020 at the research field of the Regional Agricultural Research Station (RARS), Jamalpur, Bangladesh (soil texture belonging to Agro-Ecological Zone (AEZ)-8/9; 24˚56'11''N latitude, 89˚55'54''E longitude, altitude 16.46 m) to evaluate the effect of CA approaches on energy use efficiency and the streamlined life cycle greenhouse gas (GHG) emission of wheat production. For this, conservation tillage practices (strip tillage (ST) and minimum tillage (MT)) were adopted in comparison with conventional farmers' tillage (CT), with a fixed level (30 cm) of residue retention. This study examined the relationship between energy consumption and life cycle GHG emission of wheat cultivation in the Jamalpur region of Bangladesh. Standard energy equivalents in megajoules (MJ) were used to measure energy from the different inputs and the output; similarly, global warming potential values for the 100-year timescale and a standard unit, kilogram of carbon dioxide equivalent (kg CO₂eq), were used to estimate direct and indirect GHG emissions from the use of on-farm and off-farm inputs. The Farm Energy Analysis Tool (FEAT) was used to analyze GHG emission and its intensity. Non-parametric data envelopment analysis (DEA) was used to estimate the optimum energy requirement of wheat production. The results showed that the treatment combination of MT with optimum energy inputs is the best fit for cost-effective, sustainable CA practice in wheat cultivation without compromising yield during the dry season.
Total input energies of 22045.86 MJ ha⁻¹, 22158.82 MJ ha⁻¹, and 23656.63 MJ ha⁻¹ were used in wheat production under ST, MT, and CT, and the output energy was calculated as 158657.40 MJ ha⁻¹, 162070.55 MJ ha⁻¹, and 149501.58 MJ ha⁻¹, respectively; the energy use efficiency (net energy ratio) was thus found to be 7.20, 7.31, and 6.32. Among these, MT is the most effective practice option in the wheat production process. The optimum energy requirement was found to be 18236.71 MJ ha⁻¹ for the practice of MT, demonstrating that if recommendations are followed, 18.7% of the input energy can be saved. The total GHG emission was calculated to be 2288 kg CO₂eq ha⁻¹, 2293 kg CO₂eq ha⁻¹, and 2331 kg CO₂eq ha⁻¹, and the GHG intensity, the ratio of kg CO₂eq emitted per MJ of output energy produced, was estimated to be 0.014 kg CO₂/MJ, 0.014 kg CO₂/MJ, and 0.015 kg CO₂/MJ, respectively. The ST practice with 30 cm residue retention was therefore the most effective GHG mitigation option when the net life cycle GHG emission was considered for wheat production in the silty clay loam soil of Bangladesh. In conclusion, the CA approaches being implemented for wheat production involving MT practice have the potential to mitigate global warming in Bangladesh and to achieve a lower soil carbon footprint; the life cycle assessment approach now needs to be applied to a more diverse range of wheat-based cropping systems.

Keywords: conservation agriculture and tillage, energy use efficiency, life cycle GHG, Bangladesh

Procedia PDF Downloads 80
372 Fulfillment of Models of Prenatal Care in Adolescents from Mexico and Chile

Authors: Alejandra Sierra, Gloria Valadez, Adriana Dávalos, Mirliana Ramírez

Abstract:

For years, the Pan American Health Organization/World Health Organization and other organizations have made efforts to improve access to and the quality of prenatal care as part of comprehensive programs for maternal and neonatal health, and the standards of care have been renewed in order to migrate from a medical perspective to a holistic one. However, despite these efforts, current antenatal care models have not been verified by scientific evaluation to determine their effectiveness. Teenage pregnancy is considered a very important phenomenon, since it has been strongly associated with inequalities, poverty and the lack of gender equality; therefore, it is important to analyze the antenatal care that is provided, including not only the clinical interventions but also the surrounding activities of advertising and health education. The objective of this study was to describe whether the previously established activities (in the prenatal care models) were being performed in the care of pregnant teenagers attending prenatal care in health institutions in two cities in Mexico and Chile during 2013. Methods: observational, descriptive, cross-sectional study. 170 pregnant women (13-19 years) receiving prenatal care in two health institutions were included (100 women from León, Mexico and 70 from Coquimbo, Chile). Data collection: direct survey and the perinatal clinical record card, which was used as a checklist against the WHO antenatal care model (WHO, 2003), the Official Mexican Standard NOM-007-SSA2-1993 and the Personalized Service Manual on the Reproductive Process - Chile Crece Contigo; descriptive statistics were used for data analysis. The project was approved by the relevant ethics committees.
Results: interventions focused on the physical and gynecological exam, immunizations, and monitoring of signs and biochemical parameters were fulfilled in both groups by more than 84%. For the activities of guidance and counseling of pregnant teenagers, compliance rates in León were below 50%; although pregnant women in Coquimbo had a higher percentage of compliance, none reached 100%. The topics least often covered were family planning, signs and symptoms of complications, and labor. Conclusions: although the coverage of the interventions indicated in the prenatal care models was high, there were still shortcomings in the fulfillment of orientation, education and health promotion activities. Deficiencies in adherence to prenatal care guidelines could be due to different circumstances, such as lack of registration or incomplete filling of medical records, lack of medical supplies or health personnel, and absences of women from prenatal check-up appointments, among many others. Therefore, studies are required to evaluate the quality of prenatal care and the effectiveness of existing models, considering the role of the different actors (pregnant women, professionals and health institutions) involved in the functionality and quality of prenatal care models, in order to create strategies to design or improve the application of a complete process of promotion and prevention of maternal and child health, as well as sexual and reproductive health in general.

Keywords: adolescent health, health systems, maternal health, primary health care

Procedia PDF Downloads 195
371 Clinical Validation of an Automated Natural Language Processing Algorithm for Finding COVID-19 Symptoms and Complications in Patient Notes

Authors: Karolina Wieczorek, Sophie Wiliams

Abstract:

Introduction: patient data is often collected in Electronic Health Record (EHR) systems for purposes such as providing care as well as reporting data. This information can be re-used to validate data models in clinical trials or in epidemiological studies. Manual validation of automated tools is vital to pick up errors in processing and to provide confidence in the output. Mentioning a disease in a discharge letter does not necessarily mean that a patient suffers from this disease: many letters discuss a diagnostic process or different tests, or discuss whether a patient has a certain disease. The COVID-19 dataset in this study was built using natural language processing (NLP), an automated algorithm which extracts information related to COVID-19 symptoms, complications, and medications prescribed within the hospital. Free-text clinical patient notes are rich sources of information which contain patient data not captured in a structured form, hence the use of named entity recognition (NER) to capture additional information. Methods: patient data (discharge summary letters) were exported and screened by the algorithm to pick up relevant terms related to COVID-19. A list of 124 Systematized Nomenclature of Medicine (SNOMED) Clinical Terms was provided in Excel with corresponding IDs. Two independent medical student researchers were given this dictionary of SNOMED terms to refer to when screening the notes; they worked on two separate datasets, called "A" and "B", respectively. Notes were screened to check that the correct terms had been picked up by the algorithm and that negated terms had not. Results: implementation in the hospital began on March 31, 2020, and the first EHR-derived extract was generated for use in an audit study on June 04, 2020.
The dataset has contributed to large, priority clinical trials (including the International Severe Acute Respiratory and Emerging Infection Consortium (ISARIC), by bulk upload to REDCap research databases) and to local research and audit studies. Successful sharing of EHR-extracted datasets requires communicating their provenance and quality, including the completeness and accuracy of the data. The validation of the algorithm gave the following results: precision 0.907, recall 0.416, and F-score 0.570. The percentage enhancement from NLP-extracted terms compared to regular data extraction alone was low (0.3%) for relatively well-documented data such as previous medical history, but higher (16.6%, 29.53%, 30.3%, and 45.1%) for complications, presenting illness, chronic procedures, and acute procedures, respectively. Conclusions: this automated NLP algorithm is shown to be useful in facilitating patient data analysis and has the potential to be used in larger-scale clinical trials, for example to assess study exclusion criteria for participants in the development of vaccines.
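The reported precision, recall, and F-score are mutually consistent under the standard definitions: the F-score is the harmonic mean of precision and recall. A small check in Python, using the values from the abstract:

```python
def f1(precision: float, recall: float) -> float:
    """F-score: harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Validation metrics reported for the COVID-19 NLP extraction algorithm.
precision, recall = 0.907, 0.416
print(f"F1 = {f1(precision, recall):.3f}")  # -> F1 = 0.570, matching the abstract
```

The pattern of high precision with low recall means the terms the algorithm did extract were usually correct, but it missed a substantial share of terms that the human reviewers found.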

Keywords: automated, algorithm, NLP, COVID-19

Procedia PDF Downloads 83
370 Evaluating the ‘Assembled Educator’ of a Specialized Postgraduate Engineering Course Using Activity Theory and Genre Ecologies

Authors: Simon Winberg

Abstract:

The landscape of professional postgraduate education is changing: the focus of these programmes is moving from preparing candidates for a life in academia towards training in the expert knowledge and skills needed to support industry. This is especially pronounced in engineering disciplines, where increasingly complex products draw on a depth of knowledge from multiple fields. It connects strongly with the broader notion of Industry 4.0, in which technology and society are brought together to achieve more powerful and desirable products, but products whose inner workings are also more complex than before. The changes in what we do, and how we do it, have a profound impact on what industry would like universities to provide. One such change is the increased demand for taught doctoral and master's programmes. These programmes aim to provide skills and training for professionals, expanding their knowledge of state-of-the-art tools and technologies. This paper investigates one such course, namely a Software Defined Radio (SDR) master's degree course. The teaching support for this course had to be drawn from an existing pool of academics, none of whom were specialists in this field. The paper focuses on the kind of educator, a 'hybrid academic', assembled from available academic staff and bolstered by research. The conceptual framework for this paper combines Activity Theory and Genre Ecology: Activity Theory is used to reason about learning and interactions during the course, and Genre Ecology is used to model the building and sharing of technical knowledge related to using tools and artifacts. Data were obtained from meetings with students and lecturers, logs, project reports, and course evaluations. The findings show how the course, which was initially academically oriented, metamorphosed into a tool-dominant peer-learning structure, largely supported by the sharing of technical tool-based knowledge.
While the academic staff could address gaps in the participants' fundamental knowledge of radio systems, the participants brought with them extensive specialized knowledge and tool experience, which they shared with the class. This created a complicated dynamic in the class, which centered largely on engagements with technology artifacts, such as simulators, from which knowledge was built. The course was characterized by a richness of 'epistemic objects', which is to say objects with knowledge-generating qualities. A significant portion of the course curriculum had to be adapted, and the learning methods changed, to accommodate the dynamic interactions that occurred during classes. This paper explains the SDR master's course in terms of conflicts and innovations in its activity system, as well as a continually hybridizing genre ecology, to show how the structuring and resource-dependence of the course transformed from its initial 'traditional' academic structure to a more entangled arrangement over time. It is hoped that insights from this paper will benefit other educators involved in the design and teaching of similar specialized professional postgraduate taught programmes.

Keywords: professional postgraduate education, taught masters, engineering education, software defined radio

Procedia PDF Downloads 70
369 Assessment of Environmental Impact for Rice Mills in Burdwan District: Special Emphasis on Groundwater, Surface Water, Soil, Vegetation and Human Health

Authors: Rajkumar Ghosh, Bhabani Prasad Mukhopadhay

Abstract:

Rice milling is an important activity in the agricultural economy of India, particularly in the Burdwan district. However, the environmental impact of rice mills is frequently underestimated. This impact is a major source of concern, given the importance of rice milling in the local economy and food supply; more than fifty (50) rice mills operate in the Burdwan district. The goal of this study is to investigate the effects of rice mills on several environmental components, with particular emphasis on groundwater, surface water, soil, and vegetation. The research comprises a thorough review of numerous rice mills located around the district, utilising both qualitative and quantitative approaches. Water samples taken from wells near rice mills will be tested for groundwater quality, with an emphasis on factors such as heavy metal pollution and pollutant concentrations. Surface water evaluations will involve monitoring rice mill discharge into neighbouring bodies of water and studying the potential impact on aquatic ecosystems. Furthermore, soil samples from the surrounding areas will be taken to examine changes in soil characteristics, nutrient content, and potential contamination from milling waste disposal. Vegetation studies will be conducted to investigate the effects of emissions and effluents on plant health and biodiversity in the region. The findings will shed light on the extent of environmental degradation caused by rice mills in the Burdwan district and provide valuable insight into the effects of such operations on water, soil, and vegetation. They will also aid in the development of appropriate legislation and regulations to reduce negative environmental repercussions and promote sustainable practices in the rice milling business. In some cases, heavy metals have been linked to health problems.
Heavy metals (As, Cd, Cu, Pb, Cr, Hg) are linked to skin, lung, brain, kidney, liver, spleen, cardiovascular, haematological, immunological, gastrointestinal, testicular, pancreatic, metabolic, and bone problems. This study therefore contributes to a better knowledge of industrial environmental impacts and establishes a framework for future studies aimed at developing a more ecologically balanced and resilient Burdwan district. The following recommendations are offered for reducing the environmental impact of rice mills: adequate waste management systems must be established to keep untreated effluents out of bodies of water; environmentally friendly rice milling processes should be used to reduce pollution; rice mill by-products should be used as fertiliser in a controlled and appropriate manner to avoid soil pollution; and groundwater, surface water, soil, and vegetation should be monitored regularly in order to detect and adapt to environmental changes. By adhering to these principles, the rice milling industry of the Burdwan district may achieve long-term growth while lowering its environmental effect and safeguarding the environment for future generations.

Keywords: groundwater, environmental analysis, biodiversity, rice mill, waste management, diseases, industrial impact

Procedia PDF Downloads 69
368 The Origins of Representations: Cognitive and Brain Development

Authors: Athanasios Raftopoulos

Abstract:

In this paper, an attempt is made to explain the evolution or development of humans' representational arsenal from its humble beginnings to its modern abstract symbols. Representations are physical entities that represent something else. To represent a thing (in a general sense of "thing") means to use, in the mind or in an external medium, a sign that stands for it. The sign can be used as a proxy for the represented thing when the thing is absent. Representations come in many varieties, from signs that perceptually resemble what they represent to abstract symbols that are related to their representata through conventions. Relying on the distinction among indices, icons, and symbols, it is explained how symbolic representations gradually emerged from indices and icons. To understand the development or evolution of our representational arsenal, the development of the cognitive capacities that enabled the gradual emergence of representations of increasing complexity and expressive capability should be examined. This examination should rely on a careful assessment of the available empirical neuroscientific and paleo-anthropological evidence, synthesized to produce arguments whose conclusions provide clues concerning the developmental process of our representational capabilities. The analysis of the empirical findings in this paper shows that Homo erectus was able to use both icons and symbols: icons were used as external representations, while symbols were used in language. The first step in the emergence of representations is that a purely causal sensory-motor schema involved in indices is decoupled from its normal causal sensory-motor functions and serves as a representation of the object that initially called it into play. Sensory-motor schemas are tied to specific contexts of organism-environment interactions and are activated only within these contexts.
For a representation of an object to be possible, this schema must be de-contextualized so that the same object can be represented in different contexts; a decoupled schema loses its direct ties to reality and becomes mental content. The analysis suggests that symbols emerged due to selection pressures of the social environment. The need to establish and maintain social relationships in ever-enlarging groups, in ways that would benefit the group, was a sufficient environmental pressure to lead to the appearance of the symbolic capacity. Symbols could serve this need because they can express abstract relationships, such as marriage or monogamy. Icons, being firmly attached to what can be observed, could not go beyond surface properties to express abstract relations. The cognitive capacities required for having iconic and then symbolic representations were present in Homo erectus, which had a language that started without syntactic rules but was structured so as to mirror the structure of the world. This language became increasingly complex, and grammatical rules started to appear to allow for the construction of the more complex expressions required to keep up with the increasing complexity of social niches. This created evolutionary pressures that eventually led to increased cranial size and a restructuring of the brain that allowed more complex representational systems to emerge.

Keywords: mental representations, iconic representations, symbols, human evolution

Procedia PDF Downloads 36
367 Organisational Mindfulness Case Study: A 6-Week Corporate Mindfulness Programme Significantly Enhances Organisational Well-Being

Authors: Dana Zelicha

Abstract:

A 6-week mindfulness programme was launched to improve the well-being and performance of 20 managers (including the supervisor) of an international corporation in London. A unique assessment methodology was customised to the organisation's needs, measuring four parameters: prioritising skills, listening skills, mindfulness levels and happiness levels. All parameters showed significant improvements (p < 0.01) post intervention, with a remarkable increase in listening skills and mindfulness levels. Although corporate mindfulness programmes have proven to be effective, the challenge remains the low engagement levels at home and the implementation of these tools beyond the scope of the intervention. This study offers an innovative approach to reinforcing home engagement, which yielded promising results. The programme launched with a 2-day introductory intervention, followed by a 6-week training course (1 day a week; 2 hours each). Participants learned basic principles of mindfulness such as mindfulness meditations, Mindfulness-Based Stress Reduction (MBSR) techniques and Mindfulness-Based Cognitive Therapy (MBCT) practices to incorporate into their professional and personal lives. The programme contained experiential mindfulness meditations and innovative mindfulness tools (OWBA-MT) created by OWBA - The Well Being Agency. Exercises included Mindful Meetings, Unitasking and Mindful Feedback. All sessions concluded with guided discussions and group reflections. One fundamental element of this programme was the engagement level outside of the workshop. In the office, participants connected with a mindfulness buddy, a team member in the group with whom they could find support throughout the programme. At home, participants completed daily online mindfulness forms that varied according to weekly themes.
These customised forms gave participants the opportunity to reflect on whether they made time for daily mindfulness practice, and to facilitate a sense of continuity and responsibility. At the end of the programme, the most engaged team member was crowned the ‘mindful maven’ and received a special gift. The four parameters were measured using online self-reported questionnaires, including the Listening Skills Inventory (LSI), Mindfulness Attention Awareness Scale (MAAS), Time Management Behaviour Scale (TMBS) and a modified version of the Oxford Happiness Questionnaire (OHQ). Pre-intervention questionnaires were collected at the start of the programme, and post-intervention data was collected 4 weeks following completion. Quantitative analysis using paired t-tests of means showed significant improvements, with a 23% increase in listening skills, a 22% improvement in mindfulness levels, a 12% increase in prioritising skills, and an 11% improvement in happiness levels. Participant testimonials exhibited high levels of satisfaction, and the overall results indicate that the mindfulness programme substantially impacted the team. These results suggest that 6-week mindfulness programmes can improve employees’ capacities to listen and work well with others, to effectively manage time and to experience enhanced satisfaction both at work and in life. Noteworthy limitations include the afterglow effect and a lack of generalisability, as this study was conducted on a small and fairly homogeneous sample.
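The paired-means analysis described above can be sketched in a few lines of Python; the pre/post scores below are hypothetical stand-ins (the study's raw data are not reported here), not the study's measurements.

```python
# Hedged sketch of a paired t-test on pre/post scores.
# The numbers are invented for illustration, not the study's data.
import math
from statistics import mean, stdev

pre  = [3.1, 2.8, 3.5, 3.0, 2.6, 3.2, 2.9, 3.4, 3.1, 2.7]  # hypothetical pre-intervention scores
post = [3.8, 3.5, 4.1, 3.6, 3.3, 3.9, 3.4, 4.0, 3.7, 3.2]  # hypothetical post-intervention scores

diffs = [b - a for a, b in zip(pre, post)]                     # within-participant gains
t_stat = mean(diffs) / (stdev(diffs) / math.sqrt(len(diffs)))  # paired t statistic, df = n - 1
pct_change = 100 * mean(diffs) / mean(pre)                     # % improvement over baseline

print(f"t = {t_stat:.2f}, mean improvement = {pct_change:.1f}%")
```

The resulting t statistic would then be compared against the t distribution with n − 1 degrees of freedom to obtain p-values of the kind the abstract reports.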

Keywords: corporate mindfulness, listening skills, organisational well being, prioritising skills, mindful leadership

366 Effectiveness of Prehabilitation on Improving Emotional and Clinical Recovery of Patients Undergoing Open Heart Surgeries

Authors: Fatma Ahmed, Heba Mostafa, Bassem Ramdan, Azza El-Soussi

Abstract:

Background: The World Health Organization stated that by 2020 cardiac disease would be the number one cause of death worldwide and estimated that 25 million people per year would suffer from heart disease. Cardiac surgery is considered an effective treatment for severe forms of cardiovascular disease that cannot be managed by medication or cardiac interventions. In spite of its benefits, cardiac surgery is a major stressful experience for patients who are candidates for it. Prehabilitation can decrease the incidence of postoperative complications as it prepares patients for surgical stress through enhancing their defenses to meet the demands of surgery. When patients anticipate the postoperative sequence of events, they prepare themselves to adopt certain behaviors, identify their roles and actively participate in their own recovery; consequently, anxiety levels decrease and functional capacity is enhanced. Prehabilitation programs can comprise interventions that include physical exercise, psychological prehabilitation, nutritional optimization and risk factor modification. Physical exercise is associated with improvements in the functioning of the various physiological systems, reflected in increased functional capacity and improved cardiac and respiratory function, making patients fit for surgical intervention. Prehabilitation programs should also prepare patients psychologically to cope with the stress, anxiety and depression associated with postoperative pain, fatigue and a limited ability to perform the usual activities of daily living, so that they can respond in a healthy manner. Notwithstanding the benefits of psychological preparation, few studies have investigated psychological prehabilitation to confirm its effect on the psychological, quality-of-life and physiological outcomes of patients who have undergone cardiac surgery.
Aim of the study: The study aims to determine the effect of prehabilitation interventions on the outcomes of patients undergoing cardiac surgeries. Methods: A quasi-experimental study design was used to conduct this study. Sixty eligible and consenting patients were recruited and divided into a control group and an intervention group (30 participants in each). One tool, namely the emotional, physiological, clinical, cognitive and functional capacity outcomes of prehabilitation intervention assessment tool, was utilized to collect the data of this study. Results: Data analysis showed significant improvement in patients' emotional state and physiological and clinical outcomes (P < 0.001) with the use of prehabilitation interventions. Conclusions: Cardiac prehabilitation in the form of providing information about surgery, circulation exercise, deep breathing exercise, incentive spirometer training and nutritional education, implemented daily for one week before surgery by patients scheduled for elective open heart surgery, has been shown to improve patients' emotional state and physiological and clinical outcomes.

Keywords: emotional recovery, clinical recovery, coronary artery bypass grafting patients, prehabilitation

365 Application of the Carboxylate Platform in the Consolidated Bioconversion of Agricultural Wastes to Biofuel Precursors

Authors: Sesethu G. Njokweni, Marelize Botes, Emile W. H. Van Zyl

Abstract:

An alternative strategy to bioethanol production is to examine the degradability of biomass in a natural system such as the rumen of mammals. This anaerobic microbial community has higher cellulolytic activities than microbial communities from other habitats and degrades cellulose to produce volatile fatty acids (VFA), methane and CO₂. VFAs have the potential to serve as intermediate products for electrochemical conversion to hydrocarbon fuels. In vitro mimicking of this process would be more cost-effective than bioethanol production as it does not require chemical pre-treatment of biomass, a sterile environment or added enzymes. The strategies of the carboxylate platform and co-cultures of a bovine ruminal microbiota from cannulated cows were combined in order to investigate and optimize the bioconversion of agricultural biomass (apple and grape pomace, citrus pulp, sugarcane bagasse and triticale straw) to high-value VFAs as intermediates for biofuel production in a consolidated bioprocess. Optimisation of reactor conditions was investigated using five different ruminal inoculum concentrations (5, 10, 15, 20 and 25%), with pH fixed at 6.8 and temperature at 39 °C. The ANKOM 200/220 fiber analyser was used to analyse in vitro neutral detergent fiber (NDF) disappearance of the feedstuffs. Fresh and cryo-frozen (5% DMSO and 50% glycerol for 3 months) rumen cultures were tested for retention of fermentation capacity and durability in 72 h fermentations in 125 ml serum vials, using a FURO medical solutions 6-valve gas manifold to induce anaerobic conditions. Fermentation of apple pomace, triticale straw, and grape pomace showed no significant difference (P > 0.05) between the 15 and 20% inoculum concentrations for total VFA yield.
However, high-performance liquid chromatographic separation within the two inoculum concentrations showed a significant difference (P < 0.05) in acetic acid yield, with the 20% inoculum concentration being the optimum at 4.67 g/l. NDF disappearance of 85% in 96 h and a total VFA yield of 11.5 g/l in 72 h (A/P ratio = 2.04) for apple pomace indicated that it was the optimal feedstuff for this process. The NDF disappearance and VFA yield of DMSO-stored (82% NDF disappearance and 10.6 g/l VFA) and glycerol-stored (90% NDF disappearance and 11.6 g/l VFA) rumen cultures also showed similar degradability of apple pomace, with no treatment effect differences compared to a fresh rumen control (P > 0.05). This absence of treatment effects indicates that storage did not compromise the cultures. Retention of fermentation capacity within the preserved cultures suggests that their metabolic characteristics were preserved due to the resilience and redundancy of the rumen culture. The extent of degradation and VFA yield achieved within a short span was similar to other carboxylate platforms that have longer run times. This study shows that, by virtue of faster rates and a high extent of degradability, small-scale alternatives to bioethanol such as rumen microbiomes and other natural fermenting microbiomes can be employed to enhance the feasibility of the large-scale implementation of biofuels.

Keywords: agricultural wastes, carboxylate platform, rumen microbiome, volatile fatty acids

364 Regulation Effect of Intestinal Microbiota by Fermented Processing Wastewater of Yuba

Authors: Ting Wu, Feiting Hu, Xinyue Zhang, Shuxin Tang, Xiaoyun Xu

Abstract:

As a by-product of yuba, the processing wastewater of yuba (PWY) contains many bioactive components such as soybean isoflavones, soybean polysaccharides and soybean oligosaccharides, making it a good source of prebiotics with potential for high-value utilization. Fermenting PWY with Lactobacillus plantarum can be considered a potential biogenic preparation that regulates the balance of the intestinal microbiota. In this study, firstly, Lactobacillus plantarum was used to ferment PWY to improve its content of active components and its antioxidant activity. Then, the health effect of fermented processing wastewater of yuba (FPWY) was measured in vitro. Finally, microencapsulation technology was applied to improve the sustained release of FPWY, reduce the loss of active components during digestion and improve the activity of FPWY. The main results are as follows: (1) FPWY presented good antioxidant capacity, with DPPH free radical scavenging ability (0.83 ± 0.01 mmol Trolox/L), ABTS free radical scavenging ability (7.47 ± 0.35 mmol Trolox/L) and iron ion reducing ability (1.11 ± 0.07 mmol Trolox/L). Compared with non-fermented processing wastewater of yuba (NFPWY), there was no significant difference in the content of total soybean isoflavones, but the content of glucoside soybean isoflavones decreased and that of aglycone soybean isoflavones increased significantly. After fermentation, PWY showed effectively reduced levels of soluble monosaccharides, disaccharides and oligosaccharides, such as glucose, fructose, galactose, trehalose, stachyose, maltose, raffinose and sucrose. (2) FPWY can significantly enhance the growth of beneficial bacteria such as Bifidobacterium, Ruminococcus and Akkermansia, significantly inhibit the growth of the harmful bacterium E. coli, regulate the structure of the intestinal microbiota, and significantly increase the content of short-chain fatty acids such as acetic acid, propionic acid, butyric acid and isovaleric acid.
A higher amount of lactic acid in the gut can be further broken down into short-chain fatty acids. (3) In order to improve the stability of soybean isoflavones in FPWY during digestion, sodium alginate and chitosan were used as wall materials for embedding. The FPWY freeze-dried powder was embedded using the orifice-coagulation bath method. The results show that when the core-to-wall ratio is 3:1, the concentration of chitosan is 1.5%, the concentration of sodium alginate is 2.0%, and the concentration of calcium is 3%, the embedding rate is 53.20%. In the simulated in vitro digestion stage, the release rate of the microcapsules reached 59.36% at the end of gastric digestion and 82.90% at the end of intestinal digestion. The microcapsules therefore showed good sustained-release performance, with almost all of the core material ultimately released. The structural analysis of the FPWY microcapsules shows that they have good mechanical properties: their hardness, springiness, cohesiveness, gumminess, chewiness and resilience were 117.75 ± 0.21 g, 0.76 ± 0.02, 0.54 ± 0.01, 63.28 ± 0.71 g·sec, 48.03 ± 1.37 g·sec and 0.31 ± 0.01, respectively. Compared with the unembedded FPWY, the infrared spectra confirmed that the FPWY freeze-dried powder was successfully embedded in the microcapsules.

Keywords: processing wastewater of yuba, lactobacillus plantarum, intestinal microbiota, microcapsule

363 Scalable CI/CD and Scalable Automation: Assisting in Optimizing Productivity and Fostering Delivery Expansion

Authors: Solanki Ravirajsinh, Kudo Kuniaki, Sharma Ankit, Devi Sherine, Kuboshima Misaki, Tachi Shuntaro

Abstract:

In software development life cycles, the absence of scalable CI/CD significantly impacts organizations, leading to increased overall maintenance costs, prolonged release delivery times, heightened manual effort, and difficulties in meeting tight deadlines. Implementing CI/CD with standard serverless technologies using cloud services overcomes the above-mentioned issues and helps organizations improve efficiency and accelerate delivery without the need to manage server maintenance and capacity. By integrating scalable CI/CD with scalable automation testing, productivity, quality, and agility are enhanced while reducing repetitive work and manual effort. Implementing scalable CI/CD for development using cloud services like ECS (Elastic Container Service), AWS Fargate, ECR (Elastic Container Registry, to store Docker images with all dependencies), Serverless Computing (serverless virtual machines), Cloud Log (for monitoring errors and logs), Security Groups (for inside/outside access to the application), Docker Containerization (Docker-based images and container techniques), Jenkins (CI/CD build management tool), and code management tools (GitHub, Bitbucket, AWS CodeCommit) can efficiently handle the demands of diverse development environments and accommodate dynamic workloads, increasing efficiency for faster delivery with good quality. CI/CD pipelines encourage collaboration among development, operations, and quality assurance teams by providing a centralized platform for automated testing, deployment, and monitoring. Scalable CI/CD streamlines the development process by automatically fetching the latest code from the repository every time the process starts, building the application based on the branches, testing the application using a scalable automation testing framework, and deploying the builds. Developers can focus more on writing code and less on managing infrastructure, as it scales based on need.
Serverless CI/CD eliminates the need to manage and maintain traditional CI/CD infrastructure, such as servers and build agents, reducing operational overhead and allowing teams to allocate resources more efficiently. Scalable CI/CD adjusts the application's scale according to usage, thereby alleviating concerns about scalability, maintenance costs, and resource needs. Creating scalable automation testing using cloud services (ECR, ECS Fargate, Docker, EFS, Serverless Computing) helps organizations run more than 500 test cases in parallel, aiding in the detection of race conditions and performance issues and reducing execution time. Scalable CI/CD offers flexibility, dynamically adjusting to varying workloads and demands, allowing teams to scale resources up or down as needed. It optimizes costs, since teams pay only for resources as they are used, and increases reliability. Scalable CI/CD pipelines employ automated testing and validation processes to detect and prevent errors early in the development cycle.
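The fetch → build → parallel-test → deploy flow described above can be modelled with a minimal sketch. This is illustrative only, not the authors' pipeline: the stage names are invented, and a thread pool stands in for the ECS/Fargate containers that would actually run the test shards.

```python
# Toy model of a scalable CI/CD flow: sequential stages plus a fanned-out test stage.
from concurrent.futures import ThreadPoolExecutor

def run_stage(name: str) -> str:
    # A real pipeline stage would shell out to git, docker, or the AWS CLI here.
    return f"{name}: ok"

def run_tests_in_parallel(n_cases: int, workers: int = 8) -> int:
    # Fan test cases out across workers, mirroring parallel containers on Fargate.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(lambda case: ("pass", case), range(n_cases)))
    return sum(1 for status, _ in results if status == "pass")

for stage in ("fetch latest code", "build Docker image", "push image to ECR"):
    print(run_stage(stage))
passed = run_tests_in_parallel(500)  # the abstract cites >500 cases run in parallel
print(f"{passed}/500 tests passed")
print(run_stage("deploy build"))
```

Scaling here amounts to raising the worker count; in the serverless setting described above, that translates to launching more containers rather than provisioning more build servers.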

Keywords: achieve parallel execution, cloud services, scalable automation testing, scalable continuous integration and deployment

362 Shear Strength Characterization of Coal Mine Spoil in Very-High Dumps with Large Scale Direct Shear Testing

Authors: Leonie Bradfield, Stephen Fityus, John Simmons

Abstract:

The shearing behavior of current and planned coal mine spoil dumps up to 400 m in height is studied using large-sample, high-stress direct shear tests performed on a range of spoils common to the coalfields of Eastern Australia. The motivation for the study is to address industry concerns that some constructed spoil dump heights (> 350 m) are exceeding the scale (≤ 120 m) for which reliable design information exists, and because modern geotechnical laboratories are not equipped to test representative spoil specimens at field-scale stresses. For more than two decades, shear strength estimation for spoil dumps has been based on either infrequent, very small-scale tests where oversize particles are scalped to comply with device specimen size capacity, such that the influence of prototype-sized particles on shear strength is not captured; or on published guidelines that provide linear shear strength envelopes derived from small-scale test data and verified in practice by the slope performance of dumps up to 120 m in height. To date, these published guidelines appear to have been reliable. However, in the field of rockfill dam design there is broad acceptance of a curvilinear shear strength envelope, and if this is applicable to coal mine spoils, then these industry-accepted guidelines may overestimate the strength and stability of dumps at higher stress levels. The pressing need to rationally define the shearing behavior of more representative spoil specimens at field-scale stresses led to the successful design, construction and operation of a large direct shear machine (LDSM) and its subsequent application to provide reliable design information for current and planned very-high dumps. The LDSM can test at a much larger scale, in terms of combined specimen size (720 mm × 720 mm × 600 mm) and stress (σₙ up to 4.6 MPa), than has ever previously been achieved using a direct shear machine for geotechnical testing of rockfill.
The results of an extensive LDSM testing program on a wide range of coal-mine spoils are compared to a published framework that is widely accepted by the Australian coal mining industry as the standard for shear strength characterization of mine spoil. A critical outcome is that the LDSM data highlight several non-compliant spoils, and stress-dependent shearing behavior, for which the correct application of the published framework will not provide reliable shear strength parameters for design. Shear strength envelopes developed from the LDSM data are also compared with dam engineering knowledge, where the failure envelopes of rockfills are curved in a concave-down manner. The LDSM data indicate that shear strength envelopes for coal-mine spoils abundant with rock fragments are not in fact curved and that the shape of the failure envelope is ultimately determined by the strength of the rock fragments. Curvilinear failure envelopes were found to be appropriate for soil-like spoils containing minor or no rock fragments, or hard-soil aggregates.
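The linear-versus-curvilinear distinction at issue can be illustrated numerically. The cohesion, friction angle, and power-law coefficients below are invented for illustration only; they are not fitted to the LDSM data.

```python
# Compare a linear (Mohr-Coulomb) envelope with a concave-down power-law envelope.
import math

def tau_linear(sigma_n: float, c: float = 50.0, phi_deg: float = 35.0) -> float:
    # Mohr-Coulomb: tau = c + sigma_n * tan(phi); stresses in kPa
    return c + sigma_n * math.tan(math.radians(phi_deg))

def tau_power(sigma_n: float, a: float = 6.0, b: float = 0.75) -> float:
    # Power-law envelope: tau = a * sigma_n**b, concave-down for b < 1
    return a * sigma_n ** b

# The ratio of the two predictions shrinks as normal stress rises toward the
# LDSM's ~4.6 MPa ceiling: a linear fit calibrated at low stress can end up
# overestimating strength at depth if the true envelope is curved.
for sigma in (100.0, 1000.0, 4600.0):  # kPa
    print(f"sigma_n = {sigma:6.0f} kPa  power/linear = {tau_power(sigma) / tau_linear(sigma):.2f}")
```

With these illustrative coefficients the two envelopes agree best near the top of the tested stress range; the study's point is that which form applies depends on the rock-fragment content of the spoil.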

Keywords: coal mine, direct shear test, high dump, large scale, mine spoil, shear strength, spoil dump

361 Mixed Monolayer and PEG Linker Approaches to Creating Multifunctional Gold Nanoparticles

Authors: D. Dixon, J. Nicol, J. A. Coulter, E. Harrison

Abstract:

The ease with which they can be functionalized, combined with their excellent biocompatibility, makes gold nanoparticles (AuNPs) ideal candidates for various applications in nanomedicine. Indeed, several promising treatments are currently undergoing human clinical trials (CYT-6091 and Auroshell). A successful nanoparticle treatment must first evade the immune system, then accumulate within the target tissue, before entering the diseased cells and delivering the payload. In order to create a clinically relevant drug delivery system, contrast agent or radiosensitizer, it is generally necessary to functionalize the AuNP surface with multiple groups, e.g. polyethylene glycol (PEG) for enhanced stability, targeting groups such as antibodies, peptides for enhanced internalization, and therapeutic agents. Creating such complex systems, and characterizing their biological response, remains a challenge. The two commonly used methods to attach multiple groups to the surface of AuNPs are the creation of a mixed monolayer, or binding groups to the AuNP surface using a bi-functional PEG linker. While some excellent in-vitro and animal results have been reported for both approaches, further work is necessary to directly compare the two methods. In this study, AuNPs capped with both PEG and a receptor-mediated endocytosis (RME) peptide were prepared using both mixed monolayer and PEG linker approaches. The PEG linker used was SH-PEG-SGA, which has a thiol at one end for AuNP attachment and an NHS ester at the other to bind to the peptide. The work builds upon previous studies carried out at the University of Ulster which have investigated AuNP synthesis, the influence of PEG on stability in a range of media, and intracellular payload release. 18-19 nm citrate-capped AuNPs were prepared using the Turkevich method via the sodium citrate reduction of boiling 0.01 wt% chloroauric acid.
To produce PEG capped AuNPs, the required amount of PEG-SH (Mw 5000) or SH-PEG-SGA (Mw 3000, Jenkem Technologies) was added, and the solution stirred overnight at room temperature. The RME (sequence: CKKKKKKSEDEYPYVPN, Biomatik) co-functionalised samples were prepared by adding the required amount of peptide to the PEG capped samples and stirring overnight. The appropriate amounts of PEG-SH and RME peptide were added to the AuNPs to produce a mixed monolayer consisting of approximately 50% PEG and 50% RME. The PEG linker samples were first fully capped with bi-functional PEG before being capped with RME peptide. An increase in diameter from 18-19 nm for the ‘as synthesized’ AuNPs to 40-42 nm after PEG capping was observed via DLS. The presence of PEG and RME peptide on both the mixed monolayer and PEG linker co-functionalized samples was confirmed by both FTIR and TGA. Bi-functional PEG linkers allow the entire AuNP surface to be capped with PEG, enabling in-vitro stability to be achieved using a lower molecular weight PEG. The approach also allows the entire outer surface to be coated with peptide or other biologically active groups, whilst also offering the promise of enhanced biological availability. The effect of mixed monolayer versus PEG linker attachment on both stability and non-specific protein corona interactions was also studied.
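The DLS sizes above admit a quick geometric estimate of the PEG shell: half the change in hydrodynamic diameter. A back-of-envelope sketch, using the midpoints of the reported ranges:

```python
# Estimate the apparent PEG shell thickness from the DLS diameters reported above.
core_d = (18 + 19) / 2         # nm, citrate-capped AuNP hydrodynamic diameter
peg_d  = (40 + 42) / 2         # nm, hydrodynamic diameter after PEG capping
shell  = (peg_d - core_d) / 2  # shell thickness per side, nm
print(f"apparent PEG shell thickness ~ {shell:.2f} nm per side")
```

This is a hydrodynamic estimate only; the solvated PEG brush measured by DLS is thicker than the dry polymer layer would be.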

Keywords: nanomedicine, gold nanoparticles, PEG, biocompatibility

360 Blended Learning Instructional Approach to Teach Pharmaceutical Calculations

Authors: Sini George

Abstract:

Active learning pedagogies are valued for their success in increasing 21st-century learners’ engagement, developing transferable skills like critical thinking or quantitative reasoning, and creating deeper and more lasting educational gains. 'Blended learning' is an active learning pedagogical approach in which direct instruction moves from the group learning space to the individual learning space, and the resulting group space is transformed into a dynamic, interactive learning environment where the educator guides students as they apply concepts and engage creatively in the subject matter. This project aimed to develop a blended learning instructional approach to teaching concepts around pharmaceutical calculations to year 1 pharmacy students. The wrong dose, strength or frequency of a medication accounts for almost a third of medication errors in the NHS; therefore, progression to year 2 requires a 70% pass in this calculation test, in addition to the standard progression requirements. Many students have struggled to achieve this requirement in the past. It was also challenging to teach these concepts to students in a large class (> 130) with mixed mathematical abilities, especially within a traditional didactic lecture format. Therefore, short screencasts with voice-over by the lecturer were provided in advance of a total of four teaching sessions (two hours/session), incorporating the core content of each session and talking through how to approach the calculations to model metacognition. Links to the screencasts were posted on the learning management system. Viewership counts were used to confirm that the students were indeed accessing and watching the screencasts on schedule. In the classroom, students had to apply the knowledge learned beforehand to a series of increasingly difficult questions.
Students were then asked to create a question in group settings (two students/group) and to discuss the questions created by their peers in their groups to promote deep conceptual learning. Students were also given time for a question-and-answer period to seek clarification on the concepts covered. Student responses to this instructional approach and their test grades were collected. After collecting and organizing the data, statistical analysis was carried out to calculate binomial statistics for the two data sets: the test grades for students who received blended learning instruction and the test grades for students who received instruction in a standard lecture format in class, to compare the effectiveness of each type of instruction. Student responses and performance data on the assessment indicate that learning content through the blended learning instructional approach led to higher levels of student engagement, satisfaction, and more substantial learning gains. The blended learning approach enabled each student to learn how to do calculations at their own pace, freeing class time for interactive application of this knowledge. Although the approach is time-consuming for an instructor to implement, the findings of this research demonstrate that it improves student academic outcomes and represents a valuable method to incorporate active learning methodologies while still maintaining broad content coverage. Satisfaction with this approach was high, and we are currently developing more pharmacy content for delivery in this format.
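One concrete form the "binomial statistics" comparison described above could take is a two-proportion z-test on pass rates. The sketch below is hedged: the pass counts are hypothetical, not the study's results.

```python
# Two-proportion z-test sketch; pass counts are invented for illustration.
import math

def two_proportion_z(pass_a: int, n_a: int, pass_b: int, n_b: int) -> float:
    p_a, p_b = pass_a / n_a, pass_b / n_b
    p_pool = (pass_a + pass_b) / (n_a + n_b)  # pooled pass rate under the null
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical cohorts of 130: 110 pass the 70% threshold after blended
# learning versus 88 after standard lectures.
z = two_proportion_z(110, 130, 88, 130)
print(f"z = {z:.2f}")  # |z| > 1.96 corresponds to significance at the 5% level
```

Comparing pass/fail counts this way treats each student's outcome as a Bernoulli trial, which matches the binomial framing of the analysis.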

Keywords: active learning, blended learning, deep conceptual learning, instructional approach, metacognition, pharmaceutical calculations

359 Educational Audit and Curricular Reforms in the Arabian Context

Authors: Irum Naz

Abstract:

In the Arabian higher education context, linguistic proficiency in the English language is considered crucial for the developmental sustainability, economic growth, and stability of communities and societies. Qatar’s educational reform package, through the 2030 vision, identifies the acquisition of English at K-12 as an essential survival communication tool for globalization, believing that Qatari students need better preparation to take on the responsibilities of leadership and to participate effectively in the country’s surging economy. The idea of introducing Qatari students to modern curricula benchmarked to high-student-performance curricula in developed countries is one of the design principles of the Education for a New Era reform project, mutually consented to and supported by the Office of Shared Services, the Communications Office, and the Supreme Education Council. In appreciation of the government’s vision, the English Language Centre (ELC) at the Community College of Qatar ran an internal educational audit and conducted evaluative research to understand and appraise the value, impact, and practicality of the existing ELC language development program. This study sought to identify the type of change that could improve the quality of Foundation Program courses and the manners in which second language learners could be assisted to transition smoothly between ELC levels. Following the interpretivist paradigm and a mixed research method, the data was gathered through a bicyclic research model and a triangular design. The analyses of the data suggested that there was a need for improvement in the ELC program as a whole, and particularly in terms of curriculum, student learning outcomes, and the general learning environment in the department.
Key findings suggest that the target program would benefit from significant revisions, which would include narrowing the focus of the courses, providing sets of specific learning objectives, and preventing repetition between levels. Another promising finding concerned the assessment tools and process. The data suggested that a set of standardized assessments that more closely suited the programs of study should be devised. It was also recommended that students undergo a more comprehensive placement process to ensure that they begin the program at an appropriate level and get the maximum benefit from their learning experience. Although this ties into the idea of a curriculum revamp, it was expected that students could leave the ELC having had exposure to courses in English for specific purposes. The idea of a more reliable exit assessment for students was raised frequently, so that the ELC could regulate itself and ensure optimum learning outcomes. Another important recommendation was the provision of a Student Learning Center that would help students to receive personalized tuition, differentiated instruction, and a self-driven and self-evaluated learning experience. In addition, it was recommended that an extra study level be added to the program to accommodate the different levels of English language proficiency represented among ELC students. The evidence collected in the course of conducting the study suggests that significant change is needed in the structure of the ELC program, specifically regarding the curriculum, the program learning outcomes, and the learning environment in general.

Keywords: educational audit, ESL, optimum learning outcomes, Qatar’s educational reforms, self-driven and self-evaluated learning experience, Student Learning Center

358 3D Printing of Polycaprolactone Scaffold with Multiscale Porosity Via Incorporation of Sacrificial Sucrose Particles

Authors: Mikaela Kutrolli, Noah S. Pereira, Vanessa Scanlon, Mohamadmahdi Samandari, Ali Tamayol

Abstract:

Bone tissue engineering has drawn significant attention, and various biomaterials have been tested. Polymers such as polycaprolactone (PCL) offer excellent biocompatibility, reasonable mechanical properties, and biodegradability. However, PCL scaffolds suffer from a critical drawback: a lack of micro/mesoporosity, which affects cell attachment, tissue integration, and mineralization. It also results in a slow degradation rate. While 3D-printing has addressed the issue of macroporosity through CAD-guided fabrication, PCL scaffolds still exhibit poor smaller-scale porosity. To overcome this, we generated composites of PCL, hydroxyapatite (HA), and powdered sucrose (PS). The latter serves as a sacrificial material to generate porous particles after sucrose dissolution. Additionally, we incorporated dexamethasone (DEX) to boost the osteogenic properties of PCL. The resulting scaffolds maintain controlled macroporosity from the lattice print structure but also develop micro/mesoporosity within the PCL fibers when exposed to aqueous environments. The study involved mixing PS into solvent-dissolved PCL in different weight ratios of PS to PCL (70:30, 50:50, and 30:70 wt%). The resulting composite was used for 3D printing of scaffolds at room temperature. Printability was optimized by adjusting pressure, speed, and layer height through filament collapse and fusion tests. Enzymatic degradation, porogen leaching, and DEX release profiles were characterized. Physical properties were assessed using wettability, SEM, and micro-CT to quantify the porosity (percentage, pore size, and interconnectivity). Raman spectroscopy was used to verify the absence of sugar after leaching. Mechanical characteristics were evaluated via compression testing before and after porogen leaching. Bone marrow stromal cell (BMSC) behavior in the printed scaffolds was studied by assessing viability, metabolic activity, osteo-differentiation, and mineralization.
The scaffolds with a 70% sugar concentration exhibited superior printability and reached the highest porosity of 80%, but performed poorly during mechanical testing. A 50% PS concentration demonstrated 70% porosity, with an average pore size of 25 µm, favoring cell attachment. No trace of sucrose was found in the Raman spectra after leaching the sugar for 8 hours. Water contact angle results show improved hydrophilicity as the sugar concentration increased, making the scaffolds more conducive to cell adhesion. Bone marrow stromal cells (BMSCs) showed positive viability and proliferation results, with an increasing trend of mineralization and osteo-differentiation as the sucrose concentration increased. The addition of HA and DEX also promoted mineralization and osteo-differentiation in the cultures. The integration of PS as a porogen at a concentration of 50 wt% within PCL scaffolds presents a promising approach to address the poor cell attachment and tissue integration of PCL in bone tissue engineering. The method allows for the fabrication of scaffolds with tunable porosity and mechanical properties, suitable for various applications. The addition of HA and DEX further enhanced the scaffolds. Future studies will apply the scaffolds in an in-vivo model to thoroughly investigate their performance.

Keywords: bone, PCL, 3D printing, tissue engineering

357 Attracting Tourists: Architecture for Tourism during the Period of Korean Empire, 1897–1910

Authors: Lina Shinhwa Koo

Abstract:

The Korean Empire, or Daehanjeguk, was proclaimed by King Gojong (1852–1919) in 1897 with the aim of asserting its sovereignty as a nation-state amid political threats from neighbouring countries such as Japan and Russia. The Korean Empire period (1897–1910), which lasted until Japan annexed Korea in 1910, is a pivotal time in the modern history of Korea. It was also the period in which much of the infrastructure for tourism, including transportation and lodging systems, was established. Throughout the Korean Empire period, tourists from Japan and Euro-American countries visited Korea in growing numbers after the country had only recently opened its doors. The government of the Korean Empire also actively engaged with foreign officials and professionals. Train stations were built to connect Busan, where foreigners first arrived through the port of Jemulpo, with Seoul, the capital of Korea. In addition, hotels were built to accommodate the increasing number of tourists. Shedding new light on the modern architectural history of Korea, this paper discusses buildings made for tourism during the Korean Empire period in order to examine the historical background behind the development of tourism in Korea and the concept of travelling as it relates to architectural history. Foreigners came to Korea for varying reasons, from ethnographic research and diplomacy to business and missionary work. They also played a key role in the transportation and hotel businesses. For instance, the American entrepreneur James R. Morse received a concession to construct a railway between Busan and Seoul in 1896, which was later granted to a Japanese firm. Japanese entrepreneurs came to Korea and built hotels, such as Daebul Hotel in Incheon and Paseonggwan in Seoul. Sontag Hotel, Station Hotel and Hotel du Palais, all located in central areas of Seoul, were owned by German, British and French entrepreneurs, respectively. Each building showed distinctive architectural elements.
For example, Sontag Hotel was built in a Russian architectural style, whereas Paseonggwan combined Japanese and European styles. These varied architectural designs reflected the multicultural urban scene of the Korean Empire at the time. The existing scholarship has paid more attention to the royal buildings erected during the Korean Empire period, such as Seokjojeon at Duksu Palace. However, it is important to study the tourism-related architecture that reflected the societal situation of the Korean Empire, when contrasting ideologies, landscapes, historical narratives and political tensions intertwined and co-existed. Examining both textual and visual resources, such as news articles and photographs, this paper surveys the architectural styles and trajectories of selected hotels and train stations within a discussion of temporality and spatiality in the discipline of social science. In doing so, one can re-assess the history of the Korean Empire as the intersection of the modern and traditional, the intrinsic and extrinsic, and the national and international.

Keywords: Korean empire, modern Korean architecture, tourism, hotel, train station

356 Poly (3,4-Ethylenedioxythiophene) Prepared by Vapor Phase Polymerization for Stimuli-Responsive Ion-Exchange Drug Delivery

Authors: M. Naveed Yasin, Robert Brooke, Andrew Chan, Geoffrey I. N. Waterhouse, Drew Evans, Darren Svirskis, Ilva D. Rupenthal

Abstract:

Poly(3,4-ethylenedioxythiophene) (PEDOT) is a robust conducting polymer (CP) exhibiting high conductivity and environmental stability. It can be synthesized by chemical, electrochemical or vapour phase polymerization (VPP). Dexamethasone sodium phosphate (dexP) is an anionic drug molecule which has previously been loaded onto PEDOT as a dopant via electrochemical polymerization; however, this technique requires conductive surfaces from which polymerization is initiated. On the other hand, VPP produces highly organized, biocompatible CP structures, while polymerization can be achieved on a range of surfaces with a relatively straightforward scale-up process. Following VPP of PEDOT, dexP can be loaded and subsequently released via ion-exchange. This study aimed at preparing and characterising both non-porous and porous VPP PEDOT structures, including examining drug loading and release via ion-exchange. Porous PEDOT structures were prepared by first depositing a sacrificial polystyrene (PS) colloidal template on a substrate, heat curing this deposition, and then spin coating it with the oxidant solution (iron tosylate) at 1500 rpm for 20 sec. VPP of both porous and non-porous PEDOT was achieved by exposing the substrates to monomer vapours in a vacuum oven at 40 mbar and 40 °C for 3 hrs. Non-porous structures were prepared similarly on the same substrate but without any sacrificial template. Surface morphology, composition and electrochemical behaviour were then characterized by atomic force microscopy (AFM), scanning electron microscopy (SEM), x-ray photoelectron spectroscopy (XPS) and cyclic voltammetry (CV), respectively. Drug loading was achieved by 50 CV cycles in a 0.1 M dexP aqueous solution. For drug release, each sample was exposed to 20 mL of phosphate buffered saline (PBS) in a water bath operating at 37 °C and 100 rpm. Films were stimulated (continuous pulses of ±1 V at 0.5 Hz for 17 mins) while immersed in PBS.
Samples were collected at 1, 2, 6, 23, 24, 26 and 27 hrs and analysed for dexP by high performance liquid chromatography (HPLC, Agilent 1200 series). AFM and SEM revealed the honeycomb nature of the prepared porous structures. XPS data showed the elemental composition of the dexP-loaded film surface, which agreed well with that of PEDOT and indicated that approximately one dexP molecule was present per three EDOT monomer units. The reproducible electroactive nature was shown by several cycles of reduction and oxidation via CV. Drug release revealed successful drug loading via ion-exchange, with stimulated porous and non-porous structures exhibiting a proof-of-concept burst release upon application of an electrical stimulus. A similar drug release pattern was observed for porous and non-porous structures without any statistically significant difference, possibly due to the thin nature of these structures. To our knowledge, this is the first report to explore the potential of VPP-prepared PEDOT for stimuli-responsive drug delivery via ion-exchange. The produced porous structures were ordered and highly porous, as indicated by AFM and SEM, and exhibited good electroactivity, as shown by CV. Future work will investigate porous structures as nano-reservoirs to increase drug loading while sealing these structures to minimize spontaneous drug leakage.
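
As an illustrative sketch (not the authors' analysis pipeline), HPLC concentrations measured at the stated sampling times can be converted into a cumulative-release profile. The drug loading and concentration values below are hypothetical.

```python
TIMEPOINTS_H = [1, 2, 6, 23, 24, 26, 27]  # sampling times from the protocol
RELEASE_VOLUME_ML = 20.0                  # PBS volume per sample

def release_profile(concentrations_ug_ml, loaded_ug):
    """Percent of loaded dexP released at each sampling time."""
    return [100.0 * c * RELEASE_VOLUME_ML / loaded_ug
            for c in concentrations_ug_ml]

# Hypothetical data showing a burst after an electrical stimulus near 24 h
concs = [0.5, 0.8, 1.0, 1.1, 2.5, 2.6, 2.7]
profile = release_profile(concs, loaded_ug=1000.0)
print(list(zip(TIMEPOINTS_H, profile)))
```

A fuller treatment would correct each point for the drug mass removed with earlier samples; the proportional form above is the simplest approximation.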

Keywords: PEDOT for ion-exchange drug delivery, stimuli-responsive drug delivery, template based porous PEDOT structures, vapour phase polymerization of PEDOT

355 Simulation-based Decision Making on Intra-hospital Patient Referral in a Collaborative Medical Alliance

Authors: Yuguang Gao, Mingtao Deng

Abstract:

The integration of independently operating hospitals into a unified healthcare service system has become a strategic imperative in the pursuit of hospitals’ high-quality development. Central to the concept of group governance over such transformation, exemplified by a collaborative medical alliance, is the delineation of shared value, vision, and goals. Given the inherent disparity in capabilities among hospitals within the alliance, particularly in the treatment of different diseases characterized by Disease Related Groups (DRG) in terms of effectiveness, efficiency and resource utilization, this study aims to address the centralized decision-making of intra-hospital patient referral within the medical alliance to enhance the overall production and quality of service provided. We first introduce the notion of production utility, where a higher production utility for a hospital implies better performance in treating patients diagnosed with that specific DRG group of diseases. Then, a Discrete-Event Simulation (DES) framework is established for patient referral among hospitals, where patient flow modeling incorporates a queueing system with fixed capacities for each hospital. The simulation study begins with a two-member alliance. The pivotal strategy examined is a "whether-to-refer" decision triggered when the bed usage rate surpasses a predefined threshold for either hospital. Then, the decision encompasses referring patients to the other hospital based on DRG groups’ production utility differentials as well as bed availability. The objective is to maximize the total production utility of the alliance while minimizing patients’ average length of stay and turnover rate. Thus the parameter under scrutiny is the bed usage rate threshold, influencing the efficacy of the referral strategy. 
Extending the study to a three-member alliance, which could readily be generalized to multi-member alliances, we maintain the core setup while introducing an additional "which-to-refer" decision that involves referring patients with specific DRG groups to the member hospital according to their respective production utility rankings. The overarching goal remains consistent, for which the bed usage rate threshold is once again a focal point for analysis. For the two-member alliance scenario, our simulation results indicate that the optimal bed usage rate threshold hinges on the discrepancy in the number of beds between member hospitals, the distribution of DRG groups among incoming patients, and variations in production utilities across hospitals. Transitioning to the three-member alliance, we observe similar dependencies on these parameters. Additionally, it becomes evident that an imbalanced distribution of DRG diagnoses and further disparity in production utilities among member hospitals may lead to an increase in the turnover rate. In general, the intra-hospital referral mechanism was found to enhance the overall production utility of the medical alliance compared to individual hospitals without partnership. Patients' average length of stay is also reduced, showcasing the positive impact of the collaborative approach. However, the turnover rate exhibits variability based on parameter setups, particularly when patients are redirected within the alliance. In conclusion, the re-structuring of diagnostic disease groups within the medical alliance proves instrumental in improving overall healthcare service outcomes, providing a compelling rationale for the government's promotion of patient referrals within collaborative medical alliances.
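
The two-member "whether-to-refer" rule described above can be sketched as a small decision function. The hospital records, DRG utilities, and the 85% threshold below are illustrative assumptions, not parameters from the study.

```python
def bed_usage(h):
    return h["occupied"] / h["beds"]

def admit(drg, home, partner, threshold=0.85):
    """Whether-to-refer: stay home unless bed usage exceeds the
    threshold and the partner has free beds and a higher production
    utility for this DRG (or the home hospital is completely full)."""
    if bed_usage(home) > threshold and bed_usage(partner) < 1.0:
        if (partner["utility"].get(drg, 0.0) > home["utility"].get(drg, 0.0)
                or bed_usage(home) >= 1.0):
            return partner
    return home

a = {"name": "A", "beds": 10, "occupied": 9,
     "utility": {"DRG1": 0.6, "DRG2": 0.9}}
b = {"name": "B", "beds": 8, "occupied": 4,
     "utility": {"DRG1": 0.8, "DRG2": 0.5}}

print(admit("DRG1", a, b)["name"])  # B: A is over threshold, B is better at DRG1
print(admit("DRG2", a, b)["name"])  # A: A remains the better hospital for DRG2
```

In the full DES framework this rule would be evaluated at each arrival event, with bed counts updated as patients are admitted and discharged.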

Keywords: collaborative medical alliance, disease related group, patient referral, simulation

354 Timely Palliative Screening and Interventions in Oncology

Authors: Jaci Marie Mastrandrea, Rosario Haro

Abstract:

Background: The National Comprehensive Cancer Network (NCCN) recommends that healthcare institutions have established processes for integrating palliative care (PC) into cancer treatment and that all cancer patients be screened for PC needs upon initial diagnosis as well as throughout the entire continuum of care (National Comprehensive Cancer Network, 2021). Early PC screening and intervention is directly associated with improved patient outcomes. The Sky Lakes Cancer Treatment Center (SLCTC) is an institution that has access to PC services yet had no protocol in place for identifying patients with palliative needs and no standardized referral process. The aim of this quality improvement project was to improve early access to PC services by establishing a standardized screening and referral process for outpatient oncology patients. Method: The sample population included all adult patients with an oncology diagnosis who presented to the SLCTC for treatment during the project timeline. The "Palliative and Supportive Needs Assessment" (PSNA) screening tool was developed from validated, evidence-based PC referral criteria. The tool was initially implemented using paper forms, and data were collected over a period of eight weeks. Patients were screened by nurses on the SLCTC oncology treatment team, who received an educational in-service prior to implementation. Patients with a PSNA score of three or higher received an educational handout and teaching on PC and symptom management. A score of five or higher indicates that PC referral is strongly recommended, and the patient's EHR is flagged for the oncology provider to review orders for PC referral. The PSNA tool was approved by Sky Lakes administration for full integration into Epic-Beacon.
The project lead collaborated with the Sky Lakes information systems team and representatives from Epic on the tool's appearance and functionality within the Epic system. SLCTC nurses and physicians were educated on how to document the PSNA within Epic and where to view results. Results: Prior to the implementation of the PSNA screening tool, the SLCTC had made zero referrals to PC in the previous year, excluding referrals to hospice. Data were collected from the completed screening assessments of 100 patients under active treatment at the SLCTC. Seventy-three percent of patients met criteria for PC referral with a score greater than or equal to three. Of those patients who met referral criteria, 53.4% (39 patients) were referred for a palliative and supportive care consultation. Patients who were not referred to PC upon meeting criteria were flagged in Epic for re-screening within one to three months. Patients with lung cancer, chronic hematologic malignancies, breast cancer, and gastrointestinal malignancies most frequently met the criteria for PC referral and scored highest overall on the 0–12 scale. Conclusion: The implementation of a standardized PC screening tool at the SLCTC significantly increased awareness of PC needs among cancer patients in the outpatient setting. Additionally, data derived from this quality improvement project support the national recommendation for PC to be an integral component of cancer treatment across the entire continuum of care.
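
The two PSNA cut-offs described above reduce to a simple tiered decision rule. This sketch assumes the 0–12 scale mentioned in the results; the action labels are hypothetical, not Epic order names.

```python
def psna_actions(score):
    """Tiered PSNA follow-up: a score >= 3 triggers PC education,
    and a score >= 5 also flags the chart for referral review."""
    if not 0 <= score <= 12:
        raise ValueError("PSNA scores range from 0 to 12")
    actions = []
    if score >= 3:
        actions.append("provide palliative care education")
    if score >= 5:
        actions.append("flag chart for referral review")
    return actions

print(psna_actions(2))  # []
print(psna_actions(4))  # education only
print(psna_actions(7))  # education plus referral flag
```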

Keywords: oncology, palliative and supportive care, symptom management, outpatient oncology, palliative screening tool

353 Developing and Standardizing Individual Care Plan for Children in Conflict with Law in the State of Kerala

Authors: Kavitha Puthanveedu, Kasi Sekar, Preeti Jacob, Kavita Jangam

Abstract:

In India, the Juvenile Justice (Care and Protection of Children) Act, 2015, the law relating to children alleged or found to be in conflict with law, addresses the rehabilitation of children in conflict with law by catering to their basic rights through care and protection, development, treatment, and social re-integration. The major concerns identified in addressing the issues of children in conflict with law (CCL) in Kerala, the southernmost state of India, were: 1. lack of psychological assessment for children in conflict with law; 2. poor psychosocial intervention for children in conflict with law on bail; 3. lack of psychosocial intervention and of proper care and protection for CCL residing at observation and special homes; 4. lack of convergence with systems related to mental health care. Aim: To develop an individual care plan for children in conflict with law. Methodology: NIMHANS, a premier institute of mental health and neurosciences, collaborated with the Social Justice Department, Govt. of Kerala, to address this issue by developing a participatory methodology to implement psychosocial care within the existing services, integrating activities through a multidisciplinary and multisectoral approach as per Sec. 18 of the JJ Act, 2015. Developing the individual care plan: Key informant interviews and focus group discussions were conducted with multiple stakeholders, consisting of legal officers, police, child protection officials, counselors, and home staff. Case studies were conducted among children in conflict with law. A checklist of 80 psychosocial problems among children in conflict with law was prepared, covering eight major domains identified through this process: family and parental characteristics, family interactions and relationships, stressful life events, social and environmental factors, the child's individual characteristics, education, child labour, and high-risk behaviour.
Standardised scales were used to assess anxiety, caseness, suicidality and substance use among the children. This provided background data for understanding the psychosocial problems experienced by children in conflict with law. In the second stage, a detailed plan of action was developed involving multiple stakeholders, including the Special Juvenile Police Unit, DCPO, JJB, and NGOs. The individual care plan was reviewed by a panel of four experts working in the area of children, followed by review by multiple stakeholders in the juvenile justice system, such as magistrates, JJB members, legal-cum-probation officers, district child protection officers, social workers and counselors. Necessary changes were made to the individual care plan at each stage, and the plan was pilot tested with 45 children for a period of one month and standardized for administration among children in conflict with law. Result: The individual care plan developed through this scientific process was standardized and is currently administered among children in conflict with law in 3 districts of the state of Kerala, and will be further implemented in the other 14 districts. The program was successful in developing a systematic approach to the psychosocial intervention of children in conflict with law that can be a forerunner for other states in India.
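
The checklist aggregation implied above (80 items grouped under eight domains) can be sketched as a simple tally. The domain names come from the abstract; the item responses below are hypothetical.

```python
from collections import Counter

DOMAINS = [
    "family and parental characteristics",
    "family interactions and relationships",
    "stressful life events",
    "social and environmental factors",
    "individual characteristics",
    "education",
    "child labour",
    "high-risk behaviour",
]

def domain_profile(item_responses):
    """Tally flagged checklist items per domain.

    item_responses: iterable of (domain, flagged) pairs, one per item
    of the 80-item checklist.
    """
    counts = Counter(d for d, flagged in item_responses if flagged)
    return {d: counts[d] for d in DOMAINS if counts[d]}

sample = [("education", True), ("education", True),
          ("child labour", False), ("high-risk behaviour", True)]
print(domain_profile(sample))  # {'education': 2, 'high-risk behaviour': 1}
```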

Keywords: psychosocial care, individual care plan, multidisciplinary, multisectoral

352 Biomaterials Solutions to Medical Problems: A Technical Review

Authors: Ashish Thakur

Abstract:

This technical review focuses on biomaterials and their various applications in modern industries. The author elaborates not only on medical applications but also on the many applications found in other industries. The scope of the research area covers the wide range of physical, biological and chemical sciences that underpin the design of biomaterials and the clinical disciplines in which they are used. A biomaterial is now defined as a substance that has been engineered to take a form which, alone or as part of a complex system, is used to direct, by control of interactions with components of living systems, the course of any therapeutic or diagnostic procedure. Biomaterials are invariably in contact with living tissues. Thus, interactions between the surface of a synthetic material and the biological environment must be well understood. This paper reviews the benefits and challenges associated with surface modification of metals in biomedical applications. The paper also elaborates how the surface characteristics of metallic biomaterials, such as surface chemistry, topography, surface charge, and wettability, influence protein adsorption and subsequent cell behavior in terms of adhesion, proliferation, and differentiation at the biomaterial–tissue interface. It also highlights various techniques for surface modification and coating of metallic biomaterials, including physicochemical and biochemical surface treatments and calcium phosphate and oxide coatings. In this review, attention is focused on biomaterial-associated infections, from which the need for anti-infective biomaterials originates. Biomaterial-associated infections differ markedly in epidemiology, aetiology and severity, depending mainly on the anatomic site, the time of biomaterial application, and the depth of the tissues harbouring the prosthesis.
Here, the diversity and complexity of the different scenarios in which medical devices are currently utilised are explored, providing an overview of the emblematic applicative fields and of the requirements for anti-infective biomaterials. In addition, the review introduces nanomedicine and the use of both natural and synthetic polymeric biomaterials, focuses on specific current polymeric nanomedicine applications and research, and concludes with the challenges of nanomedicine research. Infection is currently regarded as the most severe and devastating complication associated with the use of biomaterials. Osteoporosis is a worldwide disease with a very high prevalence in humans older than 50. Its main clinical consequences are bone fractures, which often lead to patient disability or even death. A number of commercial biomaterials are currently used to treat osteoporotic bone fractures, but most of these have not been specifically designed for that purpose. Many drug- or cell-loaded biomaterials have been proposed in research laboratories, but very few have received approval for commercial use. Polymeric nanomaterial-based therapeutics play a key role in the field of medicine, in treatment areas such as drug delivery, tissue engineering, cancer, diabetes, and neurodegenerative diseases. Advantages of polymers over other materials for nanomedicine include increased functionality, design flexibility, improved processability, and, in some cases, biocompatibility.

Keywords: nanomedicine, tissue, infections, biomaterials

351 Enhancing Early Detection of Coronary Heart Disease Through Cloud-Based AI and Novel Simulation Techniques

Authors: Md. Abu Sufian, Robiqul Islam, Imam Hossain Shajid, Mahesh Hanumanthu, Jarasree Varadarajan, Md. Sipon Miah, Mingbo Niu

Abstract:

Coronary Heart Disease (CHD) remains a principal cause of global morbidity and mortality, characterized by atherosclerosis, the build-up of fatty deposits inside the arteries. This study introduces an innovative methodology that leverages cloud-based platforms such as AWS Live Streaming and Artificial Intelligence (AI) to detect CHD symptoms early and support prevention in web applications. By employing novel simulation processes and AI algorithms, this research aims to significantly mitigate the health and societal impacts of CHD. Methodology: This study introduces a novel simulation process alongside a multi-phased model development strategy. Initially, health-related data, including heart rate variability, blood pressure, lipid profiles, and ECG readings, were collected through user interactions with web-based applications as well as API integration. The novel simulation process involved creating synthetic datasets that mimic early-stage CHD symptoms, allowing for the refinement and training of AI algorithms under controlled conditions without compromising patient privacy. AWS Live Streaming was utilized to capture real-time health data, which were then processed and analysed using advanced AI techniques. The novel aspect of our methodology lies in the simulation of CHD symptom progression, which provides a dynamic training environment for our AI models, enhancing their predictive accuracy and robustness. Model Development: A machine learning model was developed and trained on both real and simulated datasets, incorporating a variety of algorithms, including neural networks and ensemble learning models, to identify early signs of CHD. The model's continuous learning mechanism allows it to evolve, adapting to new data inputs and improving its predictive performance over time. Results and Findings: The deployment of our model yielded promising results. In the validation phase, it achieved an accuracy of 92% in predicting early CHD symptoms, surpassing existing models.
The precision and recall metrics stood at 89% and 91%, respectively, indicating a high level of reliability in identifying at-risk individuals. These results underscore the effectiveness of combining live data streaming with AI in the early detection of CHD. Societal Implications: The implementation of cloud-based AI for CHD symptom detection represents a significant step forward in preventive healthcare. By facilitating early intervention, this approach has the potential to reduce the incidence of CHD-related complications, decrease healthcare costs, and improve patient outcomes. Moreover, the accessibility and scalability of cloud-based solutions democratize advanced health monitoring, making it available to a broader population. This study illustrates the transformative potential of integrating technology and healthcare, setting a new standard for the early detection and management of chronic diseases.
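
Precision and recall of the kind reported above follow directly from a confusion matrix. The counts below are hypothetical, chosen only to illustrate the formulas behind figures of this magnitude, not the study's actual validation data.

```python
def precision_recall(tp, fp, fn):
    """Precision = TP / (TP + FP); recall = TP / (TP + FN)."""
    return tp / (tp + fp), tp / (tp + fn)

# Hypothetical confusion-matrix counts for an at-risk classifier
p, r = precision_recall(tp=89, fp=11, fn=9)
print(round(p, 2), round(r, 2))  # 0.89 0.91
```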

Keywords: coronary heart disease, cloud-based AI, machine learning, novel simulation techniques, early detection, preventive healthcare

350 Human Facial Emotion: A Comparative and Evolutionary Perspective Using a Canine Model

Authors: Catia Correia Caeiro, Kun Guo, Daniel Mills

Abstract:

Despite growing interest, emotions remain an understudied cognitive process, and their origins are currently the focus of much debate in the scientific community. The use of facial expressions as traditional hallmarks of discrete and holistic emotions created circular reasoning, due to a priori assumptions of meaning and the associated appearance biases. Ekman and colleagues solved this problem and laid the foundations for the quantitative and systematic study of facial expressions in humans by developing an anatomically based system, independent of meaning, to measure facial behaviour: the Facial Action Coding System (FACS). One way of investigating emotion cognition processes is to apply comparative psychology methodologies and look at either closely related species (e.g. chimpanzees) or phylogenetically distant species sharing similar present adaptation problems (analogy). In this study, the domestic dog was used as a comparative animal model to look at facial expressions in social interactions in parallel with human facial expressions. The orofacial musculature is relatively well conserved across mammal species, and the same holds true for the domestic dog. Furthermore, the dog is unique in having shared the same social environment as humans for more than 10,000 years, facing similar challenges and acquiring a unique set of socio-cognitive skills in the process. Here, the spontaneous facial movements of humans and dogs were compared when interacting with hetero- and conspecifics as well as in solitary contexts. In total, 200 participants were examined with FACS and DogFACS (the Dog Facial Action Coding System), coding tools applied across four emotionally driven contexts: a) happiness (play and reunion), b) anticipation (of positive reward), c) fear (object- or situation-triggered), and d) frustration (denial of a resource). A neutral control was added for both species.
All four contexts are commonly encountered by humans and dogs, are comparable between species, and seem to give rise to emotions from homologous brain systems. The videos used in the study were extracted from public databases (e.g. YouTube) or published scientific databases (e.g. AM-FED). The results obtained allowed us to delineate clear similarities and differences in the flexibility of the facial musculature of the two species. More importantly, they shed light on which common facial movements are a product of the emotion-linked contexts (those appearing in both species) and which are characteristic of a species, revealing an important clue for the debate on the origin of emotions. Additionally, we were able to examine movements that might have emerged for interspecific communication. Finally, our results are discussed from an evolutionary perspective, adding to the recent line of work that supports an ancient shared origin of emotions in a mammal ancestor and defining emotions as mechanisms with a clear adaptive purpose, essential in numerous situations ranging from the maintenance of social bonds to the modulation of fitness and survival.
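
The cross-species comparison logic (movements coded in both species within a context versus species-specific ones) reduces to set operations over FACS/DogFACS codes. The action-unit codes below are hypothetical examples, not study findings.

```python
def compare_action_units(human_aus, dog_aus):
    """Split coded action units into shared and species-specific sets."""
    return (human_aus & dog_aus,   # candidate emotion-linked movements
            human_aus - dog_aus,   # human-specific in this context
            dog_aus - human_aus)   # dog-specific in this context

# Hypothetical codes for a play context
human_play = {"AU12", "AU25", "AU26"}
dog_play = {"AU25", "AU26", "EAD102"}
shared, human_only, dog_only = compare_action_units(human_play, dog_play)
print(sorted(shared))  # ['AU25', 'AU26']
```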

Keywords: comparative and evolutionary psychology, emotion, facial expressions, FACS

349 Relationship of Entrepreneurial Ecosystem Factors and Entrepreneurial Cognition: An Exploratory Study Applied to Regional and Metropolitan Ecosystems in New South Wales, Australia

Authors: Sumedha Weerasekara, Morgan Miles, Mark Morrison, Branka Krivokapic-Skoko

Abstract:

This paper explores the interrelationships among entrepreneurial ecosystem factors and entrepreneurial cognition in regional and metropolitan ecosystems. The entrepreneurial ecosystem factors examined include culture, infrastructure, access to finance, informal networks, support services, access to universities, and the depth and breadth of the talent pool. Using a multivariate approach, we explore the impact of these ecosystem factors or elements on entrepreneurial cognition. In doing so, the existing bodies of knowledge from the literature on entrepreneurial ecosystems and cognition have been blended to explore the relationship between entrepreneurial ecosystem factors and cognition in a way not hitherto investigated. The concept of the entrepreneurial ecosystem has received increased attention as governments, universities and communities have started to recognize the potential of integrated policies, structures, programs and processes that foster entrepreneurship activities by supporting innovation, productivity and employment growth. The notion of entrepreneurial ecosystems has evolved and grown with the advancement of theoretical research and empirical studies. Incorporating external factors such as culture, the political environment, and the economic environment within a single framework enhances the capacity to examine the functionality of the whole system and to better understand the interaction of entrepreneurial actors and factors. The literature on clusters underplays the role of entrepreneurs and entrepreneurial management in creating and co-creating organizations, markets, and supporting ecosystems. Entrepreneurs are only one set of actors, performing a limited set of roles and depending upon many other factors to thrive.
As a consequence, entrepreneurs and relevant authorities should be aware of the other actors and factors with which they engage and on which they rely, and should make strategic choices to achieve both individual and collective objectives. The study uses a stratified random sampling method to collect survey data from 12 different regions in the regional and metropolitan areas of NSW, Australia. A questionnaire was administered online among 512 small and medium enterprise owners operating their businesses in the 12 selected regions. Data were analyzed using descriptive techniques and partial least squares structural equation modeling (PLS-SEM). The findings show that even though the entrepreneurial ecosystem factors are all significantly interrelated, there is a weak relationship between most entrepreneurial ecosystem factors and entrepreneurial cognition. In the metropolitan context, the availability of finance and informal networks have the largest impact on entrepreneurial cognition, while culture, infrastructure, and support services have the smallest impact, and the talent pool and universities have a moderate impact. Interestingly, in the regional context, culture, availability of finance, and the talent pool have the highest impact on entrepreneurial cognition, while informal networks have the smallest impact and the remaining factors (infrastructure, universities, and support services) have a moderate impact. These findings suggest the need for location-specific strategies for supporting the development of entrepreneurial cognition.
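
The study itself uses PLS-SEM, but the elementary building block behind any factor-cognition relationship of this kind is a correlation coefficient. This sketch computes a Pearson correlation on synthetic scores; the variable names and data are hypothetical.

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Synthetic scores: access-to-finance ratings vs. a cognition score
finance = [2, 3, 4, 5, 6]
cognition = [1.9, 3.2, 3.8, 5.1, 6.0]
print(round(pearson_r(finance, cognition), 3))
```

PLS-SEM goes well beyond pairwise correlation (latent constructs, path coefficients), so this is only the simplest entry point to the analysis described.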

Keywords: academic achievement, colour response card, feedback

Procedia PDF Downloads 129
348 Integrative Omics-Portrayal Disentangles Molecular Heterogeneity and Progression Mechanisms of Cancer

Authors: Binder Hans

Abstract:

Cancer is no longer seen solely as a genetic disease in which genetic defects such as mutations and copy number variations affect gene regulation and eventually lead to aberrant cell functioning that can be monitored by transcriptome analysis. It has become obvious that epigenetic alterations represent a further important layer of (de-)regulation of gene activity. For example, aberrant DNA methylation is a hallmark of many cancer types, and methylation patterns have been used successfully to subtype cancer heterogeneity. Hence, unraveling the interplay between different omics levels such as the genome, transcriptome, and epigenome is indispensable for a mechanistic understanding of the molecular deregulation causing complex diseases such as cancer. This objective requires powerful downstream integrative bioinformatics methods as an essential prerequisite for charting the whole-genome mutational, transcriptomic, and epigenomic landscapes of cancer specimens and for understanding cancer genesis, progression, and heterogeneity. Basic challenges and tasks arise ‘beyond sequencing’ because of the size and complexity of the data, the need to search for hidden structures in the data, the need for knowledge mining to discover biological function, and the need for systems biology conceptual models to deduce developmental interrelations between different cancer states. These tasks are tightly related to cancer biology as an (epi-)genetic disease giving rise to aberrant genomic regulation under micro-environmental control and to clonal evolution, which leads to heterogeneous cellular states. Machine learning algorithms such as self-organizing maps (SOMs) represent one interesting option for tackling these bioinformatics tasks. The SOM method enables the recognition of complex patterns in large-scale data generated by high-throughput omics technologies. It portrays molecular phenotypes by generating individualized, easy-to-interpret images of the data landscape in combination with comprehensive analysis options. 
Our image-based, reductionist machine learning methods provide one interesting perspective on how to deal with massive data in the study of complex diseases, here gliomas, melanomas, and colon cancer, at the molecular level. As an important new challenge, we address the combined portrayal of different omics data, such as genome-wide genomic, transcriptomic, and methylomic data. The integrative-omics portrayal approach is based on joint training of the data types, and it provides separate personalized data portraits for each patient and each data type, which can be analyzed by visual inspection as one option. The new method enables an integrative genome-wide view of the omics data types and the underlying regulatory modes. It is applied to high- and low-grade gliomas and to melanomas, where it disentangles transversal and longitudinal molecular heterogeneity in terms of distinct molecular subtypes and progression paths with prognostic impact.
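As a rough illustration of the SOM portrayal idea, the following minimal self-organizing map is hand-rolled in numpy: each grid node holds a prototype vector, and the best-matching unit (BMU) and its neighbours are pulled towards every presented sample, so that distinct molecular subtypes end up in distinct regions of the map. The grid size, learning schedule, and the synthetic two-subtype data are all assumptions for this sketch, not the authors' actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(42)

def train_som(data, grid=(8, 8), epochs=50, lr0=0.5, sigma0=3.0):
    """Minimal SOM: online training with a Gaussian neighbourhood
    whose radius and learning rate decay over the epochs."""
    h, w = grid
    weights = rng.random((h, w, data.shape[1]))
    gy, gx = np.mgrid[0:h, 0:w]          # node coordinates on the grid
    for t in range(epochs):
        lr = lr0 * np.exp(-t / epochs)
        sigma = sigma0 * np.exp(-t / epochs)
        for x in data[rng.permutation(len(data))]:
            d = np.linalg.norm(weights - x, axis=2)
            by, bx = np.unravel_index(np.argmin(d), d.shape)  # BMU
            g = np.exp(-((gy - by) ** 2 + (gx - bx) ** 2) / (2 * sigma ** 2))
            weights += lr * g[..., None] * (x - weights)
    return weights

# two synthetic "molecular subtypes" in a 4-dimensional feature space
a = rng.normal(0.2, 0.05, (50, 4))
b = rng.normal(0.8, 0.05, (50, 4))
som = train_som(np.vstack([a, b]))
```

After training, the per-node distances to a sample form exactly the kind of individualized map image described above; real omics portraits use far larger grids over thousands of genes.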

Keywords: integrative bioinformatics, machine learning, molecular mechanisms of cancer, gliomas and melanomas

Procedia PDF Downloads 127
347 Train Timetable Rescheduling Using Sensitivity Analysis: Application of Sobol, Based on Dynamic Multiphysics Simulation of Railway Systems

Authors: Soha Saad, Jean Bigeon, Florence Ossart, Etienne Sourdille

Abstract:

Developing better solutions for train rescheduling problems has been drawing the attention of researchers for decades. Most research in this field deals with minor incidents that affect a large number of trains due to cascading effects; it focuses on timetables, rolling stock, and crew duties, but does not take infrastructure limits into account. The present work addresses electric infrastructure incidents that limit the power available for train traction, and hence the transportation capacity of the railway system. Rescheduling is needed in order to share the available power optimally among the different trains. We propose a rescheduling process based on dynamic multiphysics railway simulations that include the mechanical and electrical properties of all the system components and calculate physical quantities such as train speed profiles, the voltage along the catenary lines, temperatures, etc. The optimization problem to solve has a large number of continuous and discrete variables, several output constraints due to physical limitations of the system, and a high computation cost. Our approach includes a sensitivity analysis phase in order to analyze the behavior of the system and to support decision making and/or more precise optimization. This approach is a quantitative method based on simulation statistics of the dynamic railway system over a predefined range of variation of the input parameters. Three important settings are defined. Factor prioritization detects the input variables that contribute the most to the variation of the outputs. Factor fixing then identifies the input variables that do not influence the outputs and can therefore be held fixed. Lastly, factor mapping is used to study which ranges of input values lead to model realizations that correspond to feasible solutions according to defined criteria or objectives. Generalized Sobol indices are used for factor prioritization and factor fixing. 
The approach is tested on a simple railway system, with nominal traffic running on a single-track line. The considered incident is the loss of a feeding power substation, which limits the available power and the train speed. Rescheduling is needed, and the variables to be adjusted are the trains' departure times, the train speed reduction at a given position, and the number of trains (with cancellation of some trains if needed). The results show that the spacing between train departure times is the most critical variable, contributing more than 50% of the variation of the model outputs. In addition, we identify the reduced range of variation of this variable that guarantees that the output constraints are respected. Optimal solutions are extracted according to different potential objectives: minimizing travel time, train delays, traction energy, etc. A Pareto front is also built.
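The factor-prioritization step can be sketched with first-order Sobol indices estimated by the standard Monte Carlo pick-freeze scheme (two independent sample matrices A and B, plus hybrid matrices AB_i). The toy "rescheduling cost" model below is purely illustrative: the paper drives a dynamic multiphysics simulator, and the variable names and the dominance of departure spacing are assumptions chosen to echo the reported result:

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    # illustrative stand-in for the multiphysics simulator:
    # departure spacing dominates, speed cap and train count are weak
    spacing, speed_cap, n_trains = x[..., 0], x[..., 1], x[..., 2]
    return 5.0 * spacing + 0.5 * speed_cap + 0.3 * n_trains

def first_order_sobol(model, dim, n=20000):
    """Estimate first-order Sobol indices S_i with the
    Saltelli/Jansen estimator: S_i = E[f(B) (f(AB_i) - f(A))] / Var(f)."""
    A = rng.random((n, dim))
    B = rng.random((n, dim))
    fA, fB = model(A), model(B)
    var = np.var(np.concatenate([fA, fB]))
    S = np.empty(dim)
    for i in range(dim):
        ABi = A.copy()
        ABi[:, i] = B[:, i]          # freeze all inputs except the i-th
        S[i] = np.mean(fB * (model(ABi) - fA)) / var
    return S

S = first_order_sobol(model, 3)
```

For this linear toy model the spacing variable carries almost all of the output variance, mirroring the "more than 50%" finding; production sensitivity analyses would typically rely on a dedicated package such as SALib.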

Keywords: optimization, rescheduling, railway system, sensitivity analysis, train timetable

Procedia PDF Downloads 384