Search results for: pulse-on time
1986 An Exploratory Study on the Effect of a Fermented Dairy Product on Self-Reported Gut Complaints in US Recreational Athletes
Authors: Kersch-Counet C., Fransen K. H. S., Broyd M., Nyakayiru J. D. O. A., Schoemaker M. H., Mallee L. F., Bovee-Oudenhoven I. M. J.
Abstract:
Background: Around one third of people, including athletes, suffer from feelings of gut discomfort. Fermentation of dairy is a process that has been associated with products that can improve gut health. However, insight into the (potential) health benefits of most fermented foods is limited to chemical analyses and in-vitro models. Objective: The aim of this open-label, single-arm explorative trial was to investigate, in a real-life setting, the effect of consumption of a fermented whey product for 3 weeks on self-perceived physical and mental wellbeing and digestive issues in 150 US recreational athletes (20-50 years of age) with self-reported gut complaints at enrolment. Methods: Participants living on the West Coast of the US received for 3 weeks a daily 15 g powder of Biotis™ Fermentis to be mixed in water using a supplied shaker. Weekly questionnaires were conducted by MMR research to study the effect on physical/mental health issues and self-perceived gut complaints. Non-parametric tests (e.g., the Friedman test) were used to assess statistical differences over time, while the Kruskal-Wallis and Wilcoxon signed-rank tests were used for subgroup analyses. Results: Bloating, stress and anxiety were the top 3 issues of the US recreational athletes. Satisfaction with physical wellbeing increased significantly throughout the 3 weeks of fermented whey product consumption (p<0.0005). Combined digestive issues decreased significantly after 2 and 3 weeks of product consumption, with bloating showing a significant reduction (p<0.05). There was a trend towards reduced self-reported stress levels after 3 weeks, and participants reported feeling significantly more active, energetic, and vital (p<0.05). Subgroup analysis showed that gender and habitual protein supplement consumption were associated with specific health issues and modulated the response to the fermented dairy product. Conclusion: Daily consumption of the fermented Biotis™ Fermentis product is associated with a reduction in self-perceived gastrointestinal symptoms and improved overall wellbeing and mood state in US recreational athletes. This large nutrition and health consumer study provides valuable insights into self-reported gut complaints of recreational athletes in the US and their response to a fermented dairy product. A controlled clinical trial in a targeted population is recommended to scientifically substantiate the product effect as observed in this explorative study.
Keywords: real-life study, digestive health, fermented whey, sports
Procedia PDF Downloads 269
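A minimal Python sketch of the non-parametric workflow named in the Methods (a Friedman test across the weekly time points, Wilcoxon signed-rank tests for paired follow-ups, and a Kruskal-Wallis test for an unpaired subgroup split) is shown below. The scores, effect sizes and subgroup labels are simulated for illustration only and are not the study data.

```python
# Illustrative non-parametric analysis of weekly repeated measures; data simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 150
baseline = rng.normal(5.0, 1.0, n)            # hypothetical complaint scores (0-10)
week1 = baseline - rng.normal(0.3, 0.5, n)
week2 = baseline - rng.normal(0.6, 0.5, n)
week3 = baseline - rng.normal(0.9, 0.5, n)

# Friedman test: non-parametric repeated-measures test across the four time points
chi2, p_friedman = stats.friedmanchisquare(baseline, week1, week2, week3)
print(f"Friedman chi2={chi2:.2f}, p={p_friedman:.4f}")

# Wilcoxon signed-rank tests for paired follow-up comparisons against baseline
for label, week in [("week 2", week2), ("week 3", week3)]:
    w, p = stats.wilcoxon(baseline, week)
    print(f"baseline vs {label}: W={w:.1f}, p={p:.4f}")

# Kruskal-Wallis test for an unpaired subgroup comparison (e.g., by gender)
group = rng.integers(0, 2, n)                 # hypothetical subgroup labels
h, p_kw = stats.kruskal(week3[group == 0], week3[group == 1])
print(f"Kruskal-Wallis H={h:.2f}, p={p_kw:.4f}")
```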
1985 Application of Nuclear Magnetic Resonance (1H-NMR) in the Analysis of Catalytic Aquathermolysis: Colombian Heavy Oil Case
Authors: Paola Leon, Hugo Garcia, Adan Leon, Samuel Munoz
Abstract:
The enhanced oil recovery by steam injection was considered a process that only generated physical recovery mechanisms. However, there is evidence of the occurrence of a series of chemical reactions, which are called aquathermolysis, which generates hydrogen sulfide, carbon dioxide, methane, and lower molecular weight hydrocarbons. These reactions can be favored by the addition of a catalyst during steam injection; in this way, it is possible to generate the original oil in situ upgrading through the production increase of molecules of lower molecular weight. This additional effect could increase the oil recovery factor and reduce costs in transport and refining stages. Therefore, this research has focused on the experimental evaluation of the catalytic aquathermolysis on a Colombian heavy oil with 12,8°API. The effects of three different catalysts, reaction time, and temperature were evaluated in a batch microreactor. The changes in the Colombian heavy oil were quantified through nuclear magnetic resonance 1H-NMR. The relaxation times interpretation and the absorption intensity allowed to identify the distribution of the functional groups in the base oil and upgraded oils. Additionally, the average number of aliphatic carbons in alkyl chains, the number of substituted rings, and the aromaticity factor were established as average structural parameters in order to simplify the samples' compositional analysis. The first experimental stage proved that each catalyst develops a different reaction mechanism. The aromaticity factor has an increasing order of the salts used: Mo > Fe > Ni. However, the upgraded oil obtained with iron naphthenate tends to form a higher content of mono-aromatic and lower content of poly-aromatic compounds. On the other hand, the results obtained from the second phase of experiments suggest that the upgraded oils have a smaller difference in the length of alkyl chains in the range of 240º to 270°C. This parameter has lower values at 300°C, which indicates that the alkylation or cleavage reactions of alkyl chains govern at higher reaction temperatures. The presence of condensation reactions is supported by the behavior of the aromaticity factor and the bridge carbons production between aromatic rings (RCH₂). Finally, it is observed that there is a greater dispersion in the aliphatic hydrogens, which indicates that the alkyl chains have a greater reactivity compared to the aromatic structures.Keywords: catalyst, upgrading, aquathermolysis, steam
Procedia PDF Downloads 110
1984 The Facilitatory Effect of Phonological Priming on Visual Word Recognition in Arabic as a Function of Lexicality and Overlap Positions
Authors: Ali Al Moussaoui
Abstract:
An experiment was designed to assess the performance of 24 Lebanese adults (mean age 29:5 years) in a lexical decision making (LDM) task to find out how the facilitatory effect of phonological priming (PP) affects the speed of visual word recognition in Arabic as lexicality (wordhood) and phonological overlap positions (POP) vary. The experiment falls in line with previous research on phonological priming in the light of the cohort theory and in relation to visual word recognition. The experiment also departs from the research on the Arabic language in which the importance of the consonantal root as a distinct morphological unit is confirmed. Based on previous research, it is hypothesized that (1) PP has a facilitating effect in LDM with words but not with nonwords and (2) final phonological overlap between the prime and the target is more facilitatory than initial overlap. An LDM task was programmed on PsychoPy application. Participants had to decide if a target (e.g., bayn ‘between’) preceded by a prime (e.g., bayt ‘house’) is a word or not. There were 4 conditions: no PP (NP), nonwords priming nonwords (NN), nonwords priming words (NW), and words priming words (WW). The conditions were simultaneously controlled for word length, wordhood, and POP. The interstimulus interval was 700 ms. Within the PP conditions, POP was controlled for in which there were 3 overlap positions between the primes and the targets: initial (e.g., asad ‘lion’ and asaf ‘sorrow’), final (e.g., kattab ‘cause to write’ 2sg-mas and rattab ‘organize’ 2sg-mas), or two-segmented (e.g., namle ‘ant’ and naħle ‘bee’). There were 96 trials, 24 in each condition, using a within-subject design. The results show that concerning (1), the highest average reaction time (RT) is that in NN, followed firstly by NW and finally by WW. There is statistical significance only between the pairs NN-NW and NN-WW. Regarding (2), the shortest RT is that in the two-segmented overlap condition, followed by the final POP in the first place and the initial POP in the last place. The difference between the two-segmented and the initial overlap is significant, while other pairwise comparisons are not. Based on these results, PP emerges as a facilitatory phenomenon that is highly sensitive to lexicality and POP. While PP can have a facilitating effect under lexicality, it shows no facilitation in its absence, which intersects with several previous findings. Participants are found to be more sensitive to the final phonological overlap than the initial overlap, which also coincides with a body of earlier literature. The results contradict the cohort theory’s stress on the onset overlap position and, instead, give more weight to final overlap, and even heavier weight to the two-segmented one. In conclusion, this study confirms the facilitating effect of PP with words but not when stimuli (at least the primes and at most both the primes and targets) are nonwords. It also shows that the two-segmented priming is the most influential in LDM in Arabic.Keywords: lexicality, phonological overlap positions, phonological priming, visual word recognition
Procedia PDF Downloads 185
1983 Response of Caldeira De Tróia Saltmarsh to Sea Level Rise, Sado Estuary, Portugal
Authors: A. G. Cunha, M. Inácio, M. C. Freitas, C. Antunes, T. Silva, C. Andrade, V. Lopes
Abstract:
Saltmarshes are essential ecosystems both from an ecological and a biological point of view. Furthermore, they constitute an important social niche, providing valuable economic and protection functions. Thus, understanding their rates and patterns of sedimentation is critical for functional management and rehabilitation, especially in a sea level rise (SLR) scenario. The Sado estuary is located 40 km south of Lisbon. It is a bar-built estuary, separated from the sea by a large sand spit: the Tróia barrier. Caldeira de Tróia is located on the free edge of this barrier and encompasses a salt marsh with ca. 21,000 m². Sediment cores were collected in the high and low marshes and in the mudflat area of the north bank of Caldeira de Tróia. From the low marsh core, fifteen samples were chosen for ²¹⁰Pb and ¹³⁷Cs determination at the University of Geneva. The cores from the high marsh and the mudflat are still being analyzed. A sedimentation rate of 2.96 mm/year was derived from ²¹⁰Pb using the Constant Flux Constant Sedimentation model. The ¹³⁷Cs profile shows a peak in activity (1963) between 15.50 and 18.50 cm, giving a 3.1 mm/year sedimentation rate for the past 53 years. The adopted sea level rise scenario was based on a model built with an initial SLR rate of 2.1 mm/year in 2000 and an acceleration of 0.08 mm/year². Based on the harmonic analysis of 2005 data from the Setubal-Tróia tide gauge, the tide model was estimated and used to build the tidal tables for the period 2000-2016. With these tables, the average mean water levels were determined for the same time span. A digital terrain model was created from LIDAR scanning with 2 m horizontal resolution (APA-DGT, 2011) and validated with altimetric data obtained with a DGPS-RTK. The response model calculates a new elevation for each pixel of the DTM for 2050 and 2100 based on the sedimentation rates specific to each environment. At this stage, theoretical values were chosen for the high marsh and the mudflat (respectively, equal to and double the low marsh rate – 2.92 mm/year). These values will be rectified once sedimentation rates are determined for the other environments. For both projections, the total surface of the marsh decreases: 2% in 2050 and 61% in 2100. Additionally, the high marsh coverage diminishes significantly, indicating a regression in terms of maturity.
Keywords: ¹³⁷Cs, ²¹⁰Pb, saltmarsh, sea level rise, response model
Procedia PDF Downloads 250
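The following is a minimal Python sketch of the three calculations underlying this analysis: the ²¹⁰Pb CF:CS sedimentation rate (from the slope of ln(excess ²¹⁰Pb) versus depth), the ¹³⁷Cs marker rate from the 1963 peak depth, and the quadratic SLR scenario with an initial rate of 2.1 mm/year in 2000 and an acceleration of 0.08 mm/year². The ²¹⁰Pb activities, the core date (ca. 2016) and the exact form of the SLR curve are assumptions for illustration; only the peak depth range and the scenario parameters come from the abstract.

```python
# Illustrative core-dating and SLR-scenario calculations; activities are invented.
import numpy as np

# CF:CS model: ln(excess 210Pb) decays linearly with depth; rate = -lambda / slope
lam_pb210 = np.log(2) / 22.3                      # 210Pb decay constant (1/yr)
depth_cm = np.array([1, 3, 5, 7, 9, 11, 13, 15])  # hypothetical sample depths
excess_pb210 = 80 * np.exp(-0.105 * depth_cm)     # hypothetical activities (Bq/kg)
slope, _ = np.polyfit(depth_cm, np.log(excess_pb210), 1)
rate_pb210_mm_yr = -lam_pb210 / slope * 10        # cm/yr -> mm/yr
print(f"210Pb CF:CS sedimentation rate ~ {rate_pb210_mm_yr:.2f} mm/yr")

# 137Cs marker: 1963 fallout peak found at 15.5-18.5 cm, core assumed collected in 2016
peak_depth_cm = (15.5 + 18.5) / 2
rate_cs137_mm_yr = peak_depth_cm * 10 / (2016 - 1963)
print(f"137Cs sedimentation rate ~ {rate_cs137_mm_yr:.2f} mm/yr")

# SLR scenario: initial rate 2.1 mm/yr in 2000, acceleration 0.08 mm/yr^2
def slr_mm(year, r0=2.1, a=0.08, t0=2000):
    t = year - t0
    return r0 * t + 0.5 * a * t ** 2

for year in (2050, 2100):
    print(f"SLR by {year}: {slr_mm(year):.0f} mm above the 2000 level")
```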
1982 The Spatial Analysis of Wetland Ecosystem Services Valuation on Flood Protection in Tone River Basin
Authors: Tingting Song
Abstract:
Wetlands are significant ecosystems that provide a variety of ecosystem services for humans, such as providing water and food resources, purifying water quality, regulating climate, protecting biodiversity, and providing cultural, recreational, and educational resources. Wetlands also provide benefits such as reduction of flood and storm damage and of soil erosion. The flood protection ecosystem services of wetlands are often ignored. Due to climate change, floods caused by extreme weather have occurred frequently in recent years. Floods have a great impact on people's production and daily life, with more and more economic losses. The study area is the Tone River basin in the Kanto area, Japan. The Tone is the second-longest river in Japan, with the largest basin area, and the basin still suffers heavy economic losses from floods. The Tone River is also one of the rivers that provide water for Tokyo and has an important impact on economic activities in Japan. The purpose of this study was to investigate land-use changes of wetlands in the Tone River basin and whether there are spatial differences in the value of wetland functions in mitigating economic losses caused by floods. This study analyzed the land-use change of wetlands in the Tone River basin based on Landsat data from 1980 to 2020. Combined with flood economic losses, wetland area, GDP, population density, and other socio-economic data, a geospatial weighted regression model was constructed to analyze the spatial differences in wetland ecosystem service value. At present, flood protection relies mainly on hard engineering such as dams and reservoirs, but excessive dependence on hard engineering places huge financial pressure on the government and has a large impact on the ecological environment. Natural wetlands can also play a role in flood management while at the same time providing diverse ecosystem services. Moreover, the construction and maintenance costs of natural wetlands are lower than those of hard engineering, although it is not easy to say which is more effective in terms of flood management. When the marginal value of a wetland is greater than the economic loss caused by floods per unit area, relying on the flood storage capacity of the wetland to reduce flood impacts may be considered. This can promote the sustainable development of wetland ecosystems. On the other hand, spatial analysis of wetland values can provide a more effective strategy for flood management in the Tone River basin.
Keywords: wetland, geospatial weighted regression, ecosystem services, environment valuation
Procedia PDF Downloads 101
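The spatial analysis described here is commonly implemented as a geographically weighted regression (GWR). Below is a minimal Python sketch using the mgwr package (an assumption; the abstract does not name a tool), with hypothetical variable names and synthetic data standing in for the flood-loss, wetland and socio-economic layers.

```python
# Illustrative GWR of flood losses on wetland and socio-economic covariates.
import numpy as np
from mgwr.gwr import GWR
from mgwr.sel_bw import Sel_BW

rng = np.random.default_rng(1)
n = 200
coords = rng.uniform(0, 100, (n, 2))              # hypothetical grid coordinates
wetland_area = rng.uniform(0, 50, (n, 1))
gdp = rng.uniform(1, 10, (n, 1))
pop_density = rng.uniform(100, 5000, (n, 1))
X = np.hstack([wetland_area, gdp, pop_density])
flood_loss = (5 - 0.05 * wetland_area + 0.3 * gdp
              + 0.001 * pop_density + rng.normal(0, 1, (n, 1)))

bw = Sel_BW(coords, flood_loss, X).search()       # select a bandwidth
results = GWR(coords, flood_loss, X, bw).fit()

# Local coefficients: one row per location; column 1 is the wetland-area effect,
# so its spatial variation indicates where wetlands mitigate losses the most.
print("bandwidth:", bw)
print("local wetland coefficients (first 5):", results.params[:5, 1])
```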
1981 Walking across the Government of Egypt: A Single Country Comparative Study of the Past and Current Condition of the Government of Egypt
Authors: Homyr L. Garcia, Jr., Anne Margaret A. Rendon, Carla Michaela B. Taguinod
Abstract:
Nothing is constant in this world but change. This is the reality wherein a lot of people fail to recognize and maybe, it is because of the fact that some see things that are happening with little value or no value at all until it’s gone. For the past years, Egypt was known for its stable government. It was able to withstand a lot of problems and crisis which challenged their country in ways which can never be imagined. In the present time, it seems like in just a snap of a finger, the said stability vanished and it was immediately replaced by a crisis which resulted to a failure in some parts of their government. In addition, this problem continued to worsen and the current situation of Egypt is just a reflection or a result of it. On the other hand, as the researchers continued to study the reasons why the government of Egypt is unstable, they concluded that there might be a possibility that they will be able to produce ways in which their country could be helped or improved. The instability of the government of Egypt is the product of combining all the problems which affects the lives of the people. Some of the reasons that the researchers found are the following: 1) unending doubts of the people regarding the ruling capacity of elected presidents, 2) removal of President Mohamed Morsi in position, 3) economic crisis, 4) a lot of protests and revolution happened, 5) resignation of the long term President Hosni Mubarak and 6) the office of the President is most likely available only to the chosen successor. Also, according to previous researches, there are two plausible scenarios for the instability of Egypt: 1) a military intervention specifically the Supreme Council of the Armed Forces or SCAF, resulting from a contested succession and 2) an Islamist push for political power which highlights the claim that religion is a hindrance towards the development of their country and government. From the eight possible reasons, the researchers decided that they will be focusing on economic crisis since the instability is more clearly seen in the country’s economy which directly affects the people and the government itself. In addition, they made a hypothesis which states that stable economy is a prerequisite towards a stable government. If they will be able to show how this claim is true by using the Social Autopsy Research Design for the qualitative method and Pearson’s correlation coefficient for the quantitative method, the researchers might be able to produce a proposal on how Egypt can stabilize their government and avoid such problems. Also, the hypothesis will be based from the Rational Action Theory which is a theory for understanding and modeling social and economy as well as individual behavior.Keywords: Pearson’s correlation coefficient, rational action theory, social autopsy research design, supreme council of the armed forces (SCAF)
Procedia PDF Downloads 409
1980 Mental Health Awareness and Help Seeking Among Adolescents in Kerala
Authors: Fathima M. A., Milu Maria Anto
Abstract:
Aim: The current study aims to explore the understanding about Mental Health and the likelihood to seek help for mental health problems among adolescents in the state of Kerala (India). Method: A cross sectional exploratory design was used. Samples were selected using convenience sampling. Ninety nine high school and higher secondary school students who had enrolled in the program “Responsible Adolescents (READ)” organized by MKMS Education from Kerala participated in this study. The data for the present study was collected using google forms prior to the commencement of the READ programme. Open-ended questions were used to explore the understanding of participants about mental health, mental health problems, causes of mental health problems and the role of mental health professionals. The likelihood to seek help (from friends, parents, teachers and mental health professionals) for mental health problems was assessed using a visual analogue scale. Further open-ended questions were used to identify what changes in teachers and parents will make them feel more comfortable to approach them when they need help. Content analysis was used to identify themes and coded data was further analyzed using correlation. Results: The results show that students have a fair idea about what Mental Health is. Even though the majority is familiar with the names of mental health disorders, relatively fewer students identify it as irregularity in mental functions such as thoughts, emotions and behaviors. The students tend to attribute symptoms of mental health problems as the cause of mental health problems. Very few students have the understanding that biological variations and adverse childhood experiences are primary causes for the development of mental health problems. Less than half of the students were aware of the role of psychiatrists and psychologists in mental health treatment. The students were more likely to seek help from parents and friends during distress. They had a medium inclination to seek help from mental health professionals and showed even lower likelihood to seek help from teachers. The majority of the students responded that they would be more comfortable approaching teachers if they were more open-minded and approachable as well as non-judgmental and non-dismissive. Conclusion: Findings show that there is inadequate awareness among adolescents about mental health problems and their causes. There is a lack of understanding about the roles of two main mental health professionals which can pose a big hurdle in accessing adequate help from the appropriate professional at the right time. The low likelihood to seek help from teachers for mental health problems is very concerning. The major barriers reported by the students in seeking help from teachers were the judgmental and dismissive approach. The findings throw light on the current level of awareness about mental health and mental health help-seeking, which can be utilized in framing mental health awareness programs for students as well as teachers.Keywords: Mental Health Awareness, Adolescent Mental Health, Help Seeking Behavior, School Mental Health
Procedia PDF Downloads 268
1979 Biomedicine, Suffering, and Sacrifice: Myths and Prototypes in Cell and Gene Therapies
Authors: Edison Bicudo
Abstract:
Cell and gene therapies (CGTs) result from the intense manipulation of cells or the use of techniques such as gene editing. They have been increasingly used to tackle rare diseases or conditions of genetic origin, such as cancer. One might expect such a complex scientific field to be dominated by scientific findings and evidence-based explanations. However, people engaged in scientific argumentation also mobilize a range of cognitive operations of which they are not fully aware, in addition to drawing on widely available oral traditions. This paper analyses how experts discussing the potentialities and challenges of CGTs have recourse to a particular kind of prototypical myth. This sociology study, conducted at the University of Sussex (UK), involved interviews with scientists, regulators, and entrepreneurs involved in the development or governance of CGTs. It was observed that these professionals, when voicing their views, sometimes have recourse to narratives where CGTs appear as promising tools for alleviating or curing diseases. This is said to involve much personal, scientific, and financial sacrifice. In his study of traditional narratives, Hogan identified three prototypes: the romantic narrative, moved by the ideal of romantic union; the heroic narrative, moved by the desire for political power; and the sacrificial narrative, where the ideal is plenty, well-being, and health. It is argued here that discourses around CGTs often involve some narratives – or myths – that have a sacrificial nature. In this sense, the development of innovative therapies is depicted as a huge sacrificial endeavor involving biomedical scientists, biotech and pharma companies, and decision-makers. These sacrificial accounts draw on oral traditions and benefit from an emotional intensification that can be easily achieved in stories of serious diseases and physical suffering. Furthermore, these accounts draw on metaphorical understandings where diseases and vectors of diseases are considered enemies or invaders while therapies are framed as shields or protections. In this way, this paper aims to unravel the cognitive underpinnings of contemporary science – and, more specifically, biomedicine – revealing how myths, prototypes, and metaphors are highly operative even when complex reasoning is at stake. At the same time, this paper demonstrates how such hidden cognitive operations underpin the construction of powerful ideological discourses aimed at defending certain ways of developing, disseminating, and governing technologies and therapies.Keywords: cell and gene therapies, myths, prototypes, metaphors
Procedia PDF Downloads 17
1978 Potentiality of a Community of Practice between Public Schools and the Private Sector for Integrating Sustainable Development into the School Curriculum
Authors: Aiydh Aljeddani, Fran Martin
Abstract:
The critical time in which we live requires rethinking of many potential ways in order to make the concept of sustainability and its principles an integral part of our daily life. One of these potential approaches is how to attract community institutions, such as the private sector, to participate effectively in the sustainability industry by supporting public schools to fulfill their duties. A collaborative community of practice can support this purpose and can provide a flexible framework, which allows the members of the community to participate effectively. This study, conducted in Saudi Arabia, aimed to understand the process of a collaborative community of practice of involving the private sector as a member of this community to integrate the sustainability concept in school activities and projects. This study employed a qualitative methodology to understand this authentic and complex phenomenon. A case study approach, ethnography and some elements of action research were followed in this study. The methods of unstructured interviews, artifacts, observation, and teachers’ field notes were used to collect the data. The participants were three secondary teachers, twelve chief executive officers, and one school administrative officer. Certain contextual conditions, as shown by the data, should be taken into consideration when policy makers and school administrations in Saudi Arabia desire to integrate sustainability into school activities. The first of these was the acknowledgement of the valuable role of the members’ personality, efforts, abilities, and experiences, which played vital roles in integrating sustainability. Second, institutional culture, which was not expected to emerge as an important factor in this study, has a significant role in the integration of sustainability. Credibility among the members of the community towards the integration of the sustainability concept and its principles through school activities is another important condition. Fourth, some chief executive officers’ understanding of Corporate Social Responsibility (CSR) towards contribution to sustainability agenda was shallow and limited and this could impede the successful integration of sustainability. Fifth, a shared understanding between the members of the community about integrating sustainability was a vital condition in the integration process. The study also revealed that the integration of sustainability could not be an ongoing process if implemented in isolation of the other community institutions such as the private sector. The study finally offers a number of recommendations to improve on the current practices and suggests areas for further studies.Keywords: community of practice, public schools, private sector, sustainable development
Procedia PDF Downloads 208
1977 A Comparative Study between Japan and the European Union on Software Vulnerability Public Policies
Authors: Stefano Fantin
Abstract:
The present analysis outcomes from the research undertaken in the course of the European-funded project EUNITY, which targets the gaps in research and development on cybersecurity and privacy between Europe and Japan. Under these auspices, the research presents a study on the policy approach of Japan, the EU and a number of Member States of the Union with regard to the handling and discovery of software vulnerabilities, with the aim of identifying methodological differences and similarities. This research builds upon a functional comparative analysis of both public policies and legal instruments from the identified jurisdictions. The result of this analysis is based on semi-structured interviews with EUNITY partners, as well as by the participation of the researcher to a recent report from the Center for EU Policy Study on software vulnerability. The European Union presents a rather fragmented legal framework on software vulnerabilities. The presence of a number of different legislations at the EU level (including Network and Information Security Directive, Critical Infrastructure Directive, Directive on the Attacks at Information Systems and the Proposal for a Cybersecurity Act) with no clear focus on such a subject makes it difficult for both national governments and end-users (software owners, researchers and private citizens) to gain a clear understanding of the Union’s approach. Additionally, the current data protection reform package (general data protection regulation), seems to create legal uncertainty around security research. To date, at the member states level, a few efforts towards transparent practices have been made, namely by the Netherlands, France, and Latvia. This research will explain what policy approach such countries have taken. Japan has started implementing a coordinated vulnerability disclosure policy in 2004. To date, two amendments can be registered on the framework (2014 and 2017). The framework is furthermore complemented by a series of instruments allowing researchers to disclose responsibly any new discovery. However, the policy has started to lose its efficiency due to a significant increase in reports made to the authority in charge. To conclude, the research conducted reveals two asymmetric policy approaches, time-wise and content-wise. The analysis therein will, therefore, conclude with a series of policy recommendations based on the lessons learned from both regions, towards a common approach to the security of European and Japanese markets, industries and citizens.Keywords: cybersecurity, vulnerability, European Union, Japan
Procedia PDF Downloads 156
1976 Leadership in the Emergence Paradigm: A Literature Review on the Medusa Principles
Authors: Everard van Kemenade
Abstract:
Many quality improvement activities are planned. Leaders are strongly involved in missions, visions and strategic planning. They use, consciously or unconsciously, the PDCA-cycle, also know as the Deming cycle. After the planning, the plans are carried out and the results or effects are measured. If the results show that the goals in the plan have not been achieved, adjustments are made in the next plan or in the execution of the processes. Then, the cycle is run through again. Traditionally, the PDCA-cycle is advocated as a means to an end. However, PDCA is especially fit for planned, ordered, certain contexts. It fits with the empirical and referential quality paradigm. For uncertain, unordered, unplanned processes, something else might be needed instead of Plan-Do-Check-Act. Due to the complexity of our society, the influence of the context, and the uncertainty in our world nowadays, not every activity can be planned anymore. At the same time organisations need to be more innovative than ever. That provides leaders with ‘wicked tendencies’. However, that raises the question how one can innovate without being able to plan? Complexity science studies the interactions of a diverse group of agents that bring about change in times of uncertainty, e.g. when radical innovation is co-created. This process is called emergence. This research study explores the role of leadership in the emergence paradigm. Aim of the article is to study the way that leadership can support the emergence of innovation in a complex context. First, clarity is given on the concepts used in the research question: complexity, emergence, innovation and leadership. Thereafter a literature search is conducted to answer the research question. The topics ‘emergent leadership’ or ‘complexity leadership’ are chosen for an exploratory search in Google and Google Scholar using the berry picking method. Exclusion criterion is emergence in other disciplines than organizational development or in the meaning of ‘arising’. The literature search conducted gave 45 hits. Twenty-seven articles were excluded after reading the title and abstract because they did not research the topic of emergent leadership and complexity. After reading the remaining articles as a whole one more was excluded because the article used emergent in the limited meaning of ‗arising‘ and eight more were excluded because the topic did not match the research question of this article. That brings the total of the search to 17 articles. The useful conclusions from the articles are merged and grouped together under overarching topics, using thematic analysis. The findings are that 5 topics prevail when looking at possibilities for leadership to facilitate innovation: enabling, sharing values, dreaming, interacting, context sensitivity and adaptivity. Together they form In Dutch the acronym Medusa.Keywords: complexity science, emergence, leadership in the emergence paradigm, innovation, the Medusa principles
Procedia PDF Downloads 29
1975 Effect of Low Calorie Sweeteners on Chemical, Sensory Evaluation and Antidiabetic of Pumpkin Jam Fortified with Soybean
Authors: Amnah M. A. Alsuhaibani, Amal N. Al-Kuraieef
Abstract:
Introduction: In the recent decades, production of low-calorie jams is needed for diabetics that comprise low calorie fruits and low calorie sweeteners. Object: the research aimed to prepare low calorie formulated pumpkin jams (fructose, stevia and aspartame) incorporated with soy bean and evaluate the jams through chemical analysis and sensory evaluation after storage for six month. Moreover, the possible effect of consumption of low calorie jams on diabetic rats was investigated. Methods: Five formulas of pumpkin jam with different sucrose, fructose, stevia and aspartame sweeteners and soy bean were prepared and stored at 10 oC for six month compared to ordinary pumpkin jam. Chemical composition and sensory evaluation of formulated jams were evaluated at zero time, 3 month and 6 month of storage. The best three acceptable pumpkin jams were taken for biological study on diabetic rats. Rats divided into group (1) served as negative control and streptozotocin induce diabetes four rat groups that were positive diabetic control (group2), rats fed on standard diet with 10% sucrose soybean jam, fructose soybean jam and stevia soybean jam (group 3, 4&5), respectively. Results: The content of protein, fat, ash and fiber were increased but carbohydrate was decreased in low calorie formulated pumpkin jams compared to ordinary jam. Production of aspartame soybean pumpkin jam had lower score of all sensory attributes compared to other jam then followed by stevia soybean Pumpkin jam. Using non nutritive sweeteners (stevia & aspartame) with soybean in processing jam could lower the score of the sensory attributes after storage for 3 and 6 months. The highest score was recorded for sucrose and fructose soybean jams followed by stevia soybean jam while aspartame soybean jam recorded the lowest score significantly. The biological evaluation showed a significant improvement in body weight and FER of rats after six weeks of consumption of standard diet with jams (Group 3,4&5) compared to Group1. Rats consumed 10% low calorie jam with nutrient sweetener (fructose) and non nutrient sweetener (stevia) soybean jam (group 4& 5) showed significant decrease in glucose level, liver function enzymes activity, and liver cholesterol & total lipids in addition of significant increase of insulin and glycogen compared to the levels of group 2. Conclusion: low calorie pumpkin jams can be prepared by low calorie sweeteners and soybean and also storage for 3 months at 10oC without change sensory attributes. Consumption of stevia pumpkin jam fortified with soybean had positive health effects on streptozoticin induced diabetes in rats.Keywords: pumpkin jam, HFCS, aspartame, stevia, storage
Procedia PDF Downloads 183
1974 Recognizing Juxtaposition Patterns of the Dwelling Units in Housing Cluster: The Case Study of Aghayan Complex: An Example of Rural Residential Development in Qajar Era in Iran
Authors: Outokesh Fatemeh, Jourabchi Keivan, Talebi Maryam, Nikbakht Fatemeh
Abstract:
Mayamei is a small town in Iran that is located between Shahrud and Sabzevar cities, on the Silk Road. It enjoys a history of approximately 1000 years. An alley entitled ‘Aghayan’ exists in this town that comprises residential buildings of a famous family. Bathhouse, mosque, telegraph center, cistern are all related to this alley. This architectural complex belongs to Sadat Mousavi, who is one of the Mayamei's major grandees and religious household. The alley after construction has been inherited from generation to generation within the family masters. The purpose of this study, which was conducted on Aghayan alley and its associated complex, was to elucidate Iranian vernacular domestic architecture of Qajar era in small towns and villages. We searched for large, medium, and small architectural patterns in the contemplated complex, and tried to elaborate their evolution from past to the present. The other objective of this project was finding a correlation between changes in the lifestyle of the alley’s inhabitants with the form of the building's architecture. Our investigation methods included: literature review especially in regard to historical travelogues, peer site visiting, mapping, interviewing of the elderly people of the Mousavi family (the owners), and examining the available documents especially the 4 meters’ scroll-type testament of 150 years ago. For the analysis of the aforementioned data, an effort was made to discover (1) the patterns of placing of different buildings in respect of the others, (2) finding the relation between function of the buildings with their relative location in the complex, as was considered in the original design, and (3) possible changes of functions of the buildings during the time. In such an investigation, special attention was paid to the chronological changes of lifestyles of the residents. In addition, we tried to take all different activities of the residents into account including their daily life activities, religious ceremonies, etc. By combining such methods, we were able to obtain a picture of the buildings in their original (construction) state, along with a knowledge of the temporal evolution of the architecture. An interesting finding is that the Aghayan complex seems to be a big structure of the horizontal type apartments, which are placed next to each other. The houses made in this way are connected to the adjacent neighbors both by the bifacial rooms and from the roofs.Keywords: Iran, Qajar period, vernacular domestic architecture, life style, residential complex
Procedia PDF Downloads 161
1973 A Matched Case-Control Study to Assess the Association of Chikungunya Severity among Blood Groups and Other Determinants in Tesseney, Gash Barka Zone, Eritrea
Authors: Ghirmay Teklemicheal, Samsom Mehari, Sara Tesfay
Abstract:
Objectives: A total of 1074 suspected chikungunya cases were reported in Tesseney Province, Gash Barka region, Eritrea, during an outbreak. This study aimed to assess the possible association of chikungunya severity with ABO blood groups and other potential determinants. Methods: A sex-matched and age-matched case-control study was conducted during the outbreak. For each case, one control subject was selected from the mild chikungunya cases, and a second control subject was selected from the neighborhood of the case. The study was conducted from October 15, 2018, to November 15, 2018. Odds ratios (ORs) were calculated, and conditional (fixed-effect) logistic regression methods were applied; the data were analyzed and interpreted on this basis. Results: In this outbreak, 137 severe suspected chikungunya cases, 137 mild suspected chikungunya patients, and 137 controls free of chikungunya from the neighborhood of cases were analyzed. Non-O individuals, compared to those with the O blood group, showed a significant difference with a p-value of 0.002. A separate comparison between the A and O blood groups was also significant, with a p-value of 0.002. However, there was no significant difference in the severity of chikungunya among the B, AB, and O blood groups, with p-values of 0.113 and 0.708, respectively. A strong association of chikungunya severity was found with hypertension and diabetes (p-value < 0.0001), whereas there was no association between chikungunya severity and asthma (p-value = 0.695), pregnancy (p-value = 0.881), ventilator use (p-value = 0.181), air conditioner use (p-value = 0.247), or latrine type (no latrine vs. pit latrine, p-value = 0.318; septic vs. pit latrine, p-value = 0.567; flush vs. pit latrine, p-value = 0.194). Conclusions: Individuals with non-O blood groups were found to be at greater risk of severe chikungunya disease than their counterparts with the O blood group. By the same token, individuals with chronic disease were more prone to severe forms of the disease in comparison with individuals without chronic disease. Prioritization is recommended for patients with chronic diseases and non-O blood groups, since they were found to be susceptible to severe chikungunya disease. Identification of human cell surface receptor(s) for CHIKV is necessary for further understanding of its pathophysiology in humans. Therefore, molecular and functional studies will be helpful in clarifying the association between blood group antigens and CHIKV infections.
Keywords: Chikungunya, Chikungunya virus, disease outbreaks, case-control studies, Eritrea
Procedia PDF Downloads 165
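For illustration, a minimal Python sketch of an odds-ratio calculation of the kind reported above is given below, comparing non-O versus O blood groups between severe and mild cases. The 2x2 counts are hypothetical, and this unmatched calculation is a simplification; a matched design like this one would normally be analyzed with conditional logistic regression, as the authors describe.

```python
# Illustrative odds ratio with 95% CI from a hypothetical 2x2 table.
import numpy as np
from scipy import stats

#                  non-O   O
table = np.array([[ 95,   42],    # severe cases (hypothetical counts)
                  [ 70,   67]])   # mild controls (hypothetical counts)

odds_ratio = (table[0, 0] * table[1, 1]) / (table[0, 1] * table[1, 0])

# Woolf (log) method for an approximate 95% confidence interval
se_log_or = np.sqrt((1.0 / table).sum())
ci = np.exp(np.log(odds_ratio) + np.array([-1.96, 1.96]) * se_log_or)

# Chi-square and Fisher exact tests for the association
chi2, p_chi2, dof, expected = stats.chi2_contingency(table)
_, p_fisher = stats.fisher_exact(table)

print(f"OR = {odds_ratio:.2f} (95% CI {ci[0]:.2f} to {ci[1]:.2f})")
print(f"chi-square p = {p_chi2:.4f}, Fisher exact p = {p_fisher:.4f}")
```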
1972 Countering the Bullwhip Effect by Absorbing It Downstream in the Supply Chain
Authors: Geng Cui, Naoto Imura, Katsuhiro Nishinari, Takahiro Ezaki
Abstract:
The bullwhip effect, which refers to the amplification of demand variance as one moves up the supply chain, has been observed in various industries and extensively studied through analytic approaches. Existing methods to mitigate the bullwhip effect, such as decentralized demand information, vendor-managed inventory, and the Collaborative Planning, Forecasting, and Replenishment System, rely on the willingness and ability of supply chain participants to share their information. However, in practice, information sharing is often difficult to realize due to privacy concerns. The purpose of this study is to explore new ways to mitigate the bullwhip effect without the need for information sharing. This paper proposes a 'bullwhip absorption strategy' (BAS) to alleviate the bullwhip effect by absorbing it downstream in the supply chain. To achieve this, a two-stage supply chain system was employed, consisting of a single retailer and a single manufacturer. In each time period, the retailer receives an order generated according to an autoregressive process. Upon receiving the order, the retailer depletes the ordered amount, forecasts future demand based on past records, and places an order with the manufacturer using the order-up-to replenishment policy. The manufacturer follows a similar process. In essence, the mechanism of the model is similar to that of the beer game. The BAS is implemented at the retailer's level to counteract the bullwhip effect. This strategy requires the retailer to reduce the uncertainty in its orders, thereby absorbing the bullwhip effect downstream in the supply chain. The advantage of the BAS is that upstream participants can benefit from a reduced bullwhip effect. Although the retailer may incur additional costs, if the gain in the upstream segment can compensate for the retailer's loss, the entire supply chain will be better off. Two indicators, order variance and inventory variance, were used to quantify the bullwhip effect in relation to the strength of absorption. It was found that implementing the BAS at the retailer's level results in a reduction in both the retailer's and the manufacturer's order variances. However, when examining the impact on inventory variances, a trade-off relationship was observed. The manufacturer's inventory variance monotonically decreases with an increase in absorption strength, while the retailer's inventory variance does not always decrease as the absorption strength grows. This is especially true when the autoregression coefficient has a high value, causing the retailer's inventory variance to become a monotonically increasing function of the absorption strength. Finally, numerical simulations were conducted for verification, and the results were consistent with our theoretical analysis.
Keywords: bullwhip effect, supply chain management, inventory management, demand forecasting, order-up-to policy
Procedia PDF Downloads 74
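A minimal Python sketch of a simulation in this spirit is shown below: AR(1) customer demand, a retailer using a moving-average forecast with an order-up-to policy, and a smoothing parameter at the retailer standing in for the bullwhip absorption strategy. The parameter values and the specific smoothing rule are illustrative assumptions, not the authors' model.

```python
# Illustrative two-stage bullwhip simulation with a simple order-smoothing "absorption".
import numpy as np

def simulate(absorption=0.0, rho=0.7, mu=100.0, sigma=5.0, lead_time=2,
             horizon=20000, window=10, seed=0):
    rng = np.random.default_rng(seed)
    demand = np.empty(horizon)
    demand[0] = mu
    for t in range(1, horizon):                      # AR(1) customer demand
        demand[t] = mu + rho * (demand[t - 1] - mu) + rng.normal(0, sigma)

    orders = np.full(horizon, mu)                    # retailer orders to manufacturer
    prev_forecast = mu
    for t in range(window, horizon):
        forecast = demand[t - window:t].mean()       # moving-average forecast
        # order-up-to policy: replace what was sold plus the change in base stock
        raw = demand[t] + (lead_time + 1) * (forecast - prev_forecast)
        # absorption: the retailer damps fluctuations in its own orders (0 = none)
        orders[t] = (1 - absorption) * raw + absorption * orders[t - 1]
        prev_forecast = forecast

    return demand[window:].var(), orders[window:].var()

for absorption in (0.0, 0.3, 0.6):
    var_d, var_o = simulate(absorption)
    print(f"absorption={absorption:.1f}  "
          f"bullwhip ratio Var(orders)/Var(demand) = {var_o / var_d:.2f}")
```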
1971 Technological Affordances of a Mobile Fitness Application: A Role of Escapism and Social Outcome Expectation
Authors: Inje Cho
Abstract:
The leading health risks threatening the world today are associated with a modern lifestyle characterized by sedentary behavior, stress, anxiety, and an obesogenic food environment. To counter this alarming trend, the Centers for Disease Control and Prevention have proffered Physical Activity guidelines to bolster physical engagement. Concurrently, the burgeon of smartphones and mobile applications has witnessed a proliferation of fitness applications aimed at invigorating exercise adherence and real-time activity monitoring. Grounded in the Uses and gratification theory, this study delves into the technological affordances of mobile fitness applications, discerning the mediating influences of escapism and social outcome expectations on attitudes and exercise intention. The theory explains how individuals employ distinct communication mediums to satiate their exigencies and desires. Technological affordances manifest as attributes of emerging technologies that galvanize personal engagement in physical activities. Several features of mobile fitness applications include affordances for goal setting, virtual rewards, peer support, and exercise information. Escapism, denoting the inclination to disengage from normal routines, has emerged as a salient motivator for the consumption of new media. This study postulates that individual’s perceptions technological affordances within mobile fitness applications, can affect escapism and social outcome expectations, potentially influencing attitude, and behavior formation. Thus, the integrated model has been developed to empirically examine the interrelationships between technological affordances, escapism, social outcome expectations, and exercise intention. Structural Equation Modelling serves as the methodological tool, and a cohort of 400 Fitbit users shall be enlisted from the Prolific, data collection platform. A sequence of multivariate data analyses will scrutinize both the measurement and hypothesized structural models. By delving into the effects of mobile fitness applications, this study contributes to the growing of new media studies in sport management. Moreover, the novel integration of the uses and gratification theory, technological affordances, via the prism of escapism, illustrates the dynamics that underlies mobile fitness user’s attitudes and behavioral intentions. Therefore, the findings from this study contribute to theoretical understanding and provide pragmatic insights to developers and practitioners in optimizing the impact of mobile fitness applications.Keywords: technological affordances, uses and gratification, mobile fitness apps, escapism, physical activity
Procedia PDF Downloads 80
1970 mHealth-based Diabetes Prevention Program among Mothers with Abdominal Obesity: A Randomized Controlled Trial
Authors: Jia Guo, Qinyuan Huang, Qinyi Zhong, Yanjing Zeng, Yimeng Li, James Wiley, Kin Cheung, Jyu-Lin Chen
Abstract:
Context: Mothers with abdominal obesity, particularly in China, face challenges in managing their health due to family responsibilities. Existing diabetes prevention programs do not cater specifically to this demographic. Research Aim: To assess the feasibility, acceptability, and efficacy of an mHealth-based diabetes prevention program tailored for Chinese mothers with abdominal obesity in reducing weight-related variables and diabetes risk. Methodology: A randomized controlled trial was conducted in Changsha, China, where the mHealth group received personalized modules and health messages, while the control group received general health education. Data were collected at baseline, 3 months, and 6 months. Findings: The mHealth intervention significantly improved waist circumference, modifiable diabetes risk scores, daily steps, self-efficacy for physical activity, social support for physical activity, and physical health satisfaction compared to the control group. However, no differences were found in BMI and certain other variables. Theoretical Importance: The study demonstrates the feasibility and efficacy of a tailored mHealth intervention for Chinese mothers with abdominal obesity, emphasizing the potential for such programs to improve health outcomes in this population. Data Collection: Data on various variables including weight-related measures, diabetes risk scores, behavioral and psychological factors were collected at baseline, 3 months, and 6 months from participants in the mHealth and control groups. Analysis Procedures: Generalized estimating equations were used to analyze the data collected from the mHealth and control groups at different time points during the study period. Question Addressed: The study addressed the effectiveness of an mHealth-based diabetes prevention program tailored for Chinese mothers with abdominal obesity in improving various health outcomes compared to traditional general health education approaches. Conclusion: The tailored mHealth intervention proved to be feasible and effective in improving weight-related variables, physical activity, and physical health satisfaction among Chinese mothers with abdominal obesity, highlighting its potential for delivering diabetes prevention programs to this population.
Keywords: type 2 diabetes, mHealth, obesity, prevention, mothers
Procedia PDF Downloads 57
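A minimal Python sketch of a generalized estimating equations (GEE) analysis like the one described in the Analysis Procedures is given below: a repeated outcome (for example, waist circumference at 0, 3 and 6 months) modeled with a group-by-time interaction and an exchangeable working correlation, using statsmodels. The simulated data, variable names and effect size are hypothetical.

```python
# Illustrative GEE for a two-arm repeated-measures outcome; data simulated.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n_per_arm, times = 80, [0, 3, 6]
rows = []
for pid in range(2 * n_per_arm):
    group = int(pid < n_per_arm)                  # 1 = mHealth, 0 = control
    base = rng.normal(95, 8)                      # baseline waist circumference (cm)
    for t in times:
        effect = -0.6 * t * group                 # hypothetical intervention effect
        rows.append({"id": pid, "group": group, "month": t,
                     "waist": base + effect + rng.normal(0, 2)})
df = pd.DataFrame(rows)

model = smf.gee("waist ~ month * group", groups="id", data=df,
                cov_struct=sm.cov_struct.Exchangeable(),
                family=sm.families.Gaussian())
result = model.fit()
print(result.summary())   # the month:group coefficient estimates the intervention effect
```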
1969 Calibration of 2D and 3D Optical Measuring Instruments in Industrial Environments at Submillimeter Range
Authors: Alberto Mínguez-Martínez, Jesús de Vicente y Oliva
Abstract:
Modern manufacturing processes have led to the miniaturization of systems and, as a result, parts at the micro- and nanoscale are produced. This trend seems to become increasingly important in the near future. Besides, as a requirement of Industry 4.0, the digitalization of the models of production and processes makes it very important to ensure that the dimensions of newly manufactured parts meet the specifications of the models. In this way, it is possible to reduce scrap and the cost of non-conformities while ensuring the stability of production. To ensure the quality of manufactured parts, it becomes necessary to carry out traceable measurements at scales lower than one millimeter. Providing adequate traceability to the SI unit of length (the meter) for 2D and 3D measurements at this scale is a problem that does not have a unique solution in industrial environments. Researchers in the field of dimensional metrology all around the world are working on this issue. A solution for industrial environments, even if it is not complete, will enable working with some traceability. At this point, we believe that the study of surfaces could provide a first approximation to a solution. Among the different options proposed in the literature, areal topography methods may be the most relevant because they can be compared to measurements performed using Coordinate Measuring Machines (CMMs). These measuring methods give (x, y, z) coordinates for each point, expressed in two different ways: either the z coordinate as a function of x, denoted z(x), for each Y-axis coordinate, or as a function of the x and y coordinates, denoted z(x, y). Among others, optical measuring instruments, mainly microscopes, are extensively used to carry out measurements at scales lower than one millimeter because they are non-destructive. In this paper, the authors propose a calibration procedure for the scales of optical measuring instruments, particularized for a confocal microscope, using material standards that are easy to find and calibrate in metrology and quality laboratories in industrial environments. Confocal microscopes are measuring instruments capable of filtering out the out-of-focus reflected light so that, when the light reaches the detector, it is possible to take pictures of the part of the surface that is in focus. By varying the focus and taking pictures at different Z levels, specialized software interpolates between the different planes and reconstructs the surface geometry into a 3D model. As is easy to deduce, it is necessary to give traceability to each axis. As a complementary result, the roughness Ra parameter will be traced to the reference. Although the solution is designed for a confocal microscope, it may be used for the calibration of other optical measuring instruments by applying minor changes.
Keywords: industrial environment, confocal microscope, optical measuring instrument, traceability
Procedia PDF Downloads 156
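As an illustration of the scale-calibration step such a procedure relies on, the following minimal Python sketch fits indicated versus certified lengths of a calibrated material standard (for example, a stage micrometer) by least squares to obtain a scale correction factor and its uncertainty. All numerical values are invented, and the choice of standard and the linear error model are assumptions rather than the authors' procedure.

```python
# Illustrative least-squares calibration of one lateral scale of an optical instrument.
import numpy as np

reference_um = np.array([10.0, 50.0, 100.0, 200.0, 500.0])        # certified lengths
indicated_um = np.array([10.03, 50.10, 100.21, 200.45, 501.05])   # instrument readings

# Linear model: indicated = a * reference + b
(a, b), cov = np.polyfit(reference_um, indicated_um, 1, cov=True)
residuals = indicated_um - (a * reference_um + b)

print(f"scale factor a = {a:.5f} +/- {np.sqrt(cov[0, 0]):.5f}")
print(f"offset b = {b:.3f} um, RMS residual = {np.sqrt((residuals ** 2).mean()):.3f} um")
print(f"corrected value for a 300 um indicated reading: {(300.0 - b) / a:.2f} um")
```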
1968 Contribution of the Corn Milling Industry to a Global and Circular Economy
Authors: A. B. Moldes, X. Vecino, L. Rodriguez-López, J. M. Dominguez, J. M. Cruz
Abstract:
The concept of the circular economy is focus on the importance of providing goods and services sustainably. Thus, in a future it will be necessary to respond to the environmental contamination and to the use of renewables substrates by moving to a more restorative economic system that drives towards the utilization and revalorization of residues to obtain valuable products. During its evolution our industrial economy has hardly moved through one major characteristic, established in the early days of industrialization, based on a linear model of resource consumption. However, this industrial consumption system will not be maintained during long time. On the other hand, there are many industries, like the corn milling industry, that although does not consume high amount of non renewable substrates, they produce valuable streams that treated accurately, they could provide additional, economical and environmental, benefits by the extraction of interesting commercial renewable products, that can replace some of the substances obtained by chemical synthesis, using non renewable substrates. From this point of view, the use of streams from corn milling industry to obtain surface-active compounds will decrease the utilization of non-renewables sources for obtaining this kind of compounds, contributing to a circular and global economy. However, the success of the circular economy depends on the interest of the industrial sectors in the revalorization of their streams by developing relevant and new business models. Thus, it is necessary to invest in the research of new alternatives that reduce the consumption of non-renewable substrates. In this study is proposed the utilization of a corn milling industry stream to obtain an extract with surfactant capacity. Once the biosurfactant is extracted, the corn milling stream can be commercialized as nutritional media in biotechnological process or as animal feed supplement. Usually this stream is combined with other ingredients obtaining a product namely corn gluten feed or may be sold separately as a liquid protein source for beef and dairy feeding, or as a nutritional pellet binder. Following the productive scheme proposed in this work, the corn milling industry will obtain a biosurfactant extract that could be incorporated in its productive process replacing those chemical detergents, used in some point of its productive chain, or it could be commercialized as a new product of the corn manufacture. The biosurfactants obtained from corn milling industry could replace the chemical surfactants in many formulations, and uses, and it supposes an example of the potential that many industrial streams could offer for obtaining valuable products when they are manage properly.Keywords: biosurfactantes, circular economy, corn, sustainability
Procedia PDF Downloads 261
1967 Artificial Neural Network Model Based Setup Period Estimation for Polymer Cutting
Authors: Zsolt János Viharos, Krisztián Balázs Kis, Imre Paniti, Gábor Belső, Péter Németh, János Farkas
Abstract:
The paper presents the results and industrial applications of production setup period estimation based on industrial data inherited from the field of polymer cutting. The literature on polymer cutting is very limited considering the number of publications. The first polymer cutting machine has been known since the second half of the 20th century; however, the production of polymer parts with this kind of technology is still a challenging research topic. The products of the participating industrial partner must meet high technical requirements, as they are used in the medical, measurement instrumentation and painting industry branches. Typically, 20% of these parts are new work, which means that every five years almost the entire product portfolio is replaced in their low-series manufacturing environment. Consequently, a flexible production system is required, in which the estimation of the lengths of the frequent setup periods is one of the key success factors. In the investigation, several (input) parameters have been studied and grouped to create an adequate training information set for an artificial neural network as a basis for the estimation of the individual setup periods. In the first group, product information is collected, such as the product name and number of items. The second group contains material data like material type and colour. In the third group, surface quality and tolerance information are collected, including the finest surface and tightest (or narrowest) tolerance. The fourth group contains the setup data, like machine type and work shift. One source of these parameters is the Manufacturing Execution System (MES), but some data were also collected from Computer Aided Design (CAD) drawings. The number of applied tools is one of the key factors on which the industrial partner's estimations were previously based. The artificial neural network model was trained on several thousands of real industrial data records. The mean estimation accuracy of the setup periods' lengths was improved by 30%, and at the same time the deviation of the prognosis was improved by 50%. Furthermore, an investigation of the mentioned parameter groups considering the manufacturing order was also carried out. The paper also highlights the manufacturing introduction experiences and further improvements of the proposed methods, both on the shop floor and in quotation preparation. Every week, more than 100 real industrial setup events take place, and the related data are collected.
Keywords: artificial neural network, low series manufacturing, polymer cutting, setup period estimation
Procedia PDF Downloads 245
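A minimal Python sketch of this kind of estimator is given below: a small neural network regressor trained on mixed categorical and numeric setup features. The feature names loosely mirror the four parameter groups described above, but the synthetic data, network size and preprocessing choices are assumptions, not the authors' model.

```python
# Illustrative setup-time regressor on mixed categorical/numeric features.
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder, StandardScaler
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import Pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(3)
n = 2000
df = pd.DataFrame({
    "material_type": rng.choice(["PE", "PTFE", "POM"], n),       # group 2
    "colour": rng.choice(["natural", "black", "white"], n),      # group 2
    "machine_type": rng.choice(["M1", "M2"], n),                 # group 4
    "n_items": rng.integers(1, 200, n),                          # group 1
    "tightest_tol_um": rng.uniform(5, 100, n),                   # group 3
    "n_tools": rng.integers(1, 8, n),
})
# hypothetical ground-truth setup time (minutes)
df["setup_min"] = (20 + 5 * df["n_tools"] + 0.3 * (100 - df["tightest_tol_um"])
                   + rng.normal(0, 5, n))

X, y = df.drop(columns="setup_min"), df["setup_min"]
pre = ColumnTransformer([
    ("cat", OneHotEncoder(handle_unknown="ignore"),
     ["material_type", "colour", "machine_type"]),
    ("num", StandardScaler(), ["n_items", "tightest_tol_um", "n_tools"]),
])
model = Pipeline([("pre", pre),
                  ("mlp", MLPRegressor(hidden_layer_sizes=(32, 16),
                                       max_iter=2000, random_state=0))])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model.fit(X_tr, y_tr)
print("MAE on held-out setups (min):", mean_absolute_error(y_te, model.predict(X_te)))
```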
Procedia PDF Downloads 245
1966 Passing-On Cultural Heritage Knowledge: Entrepreneurial Approaches for a Higher Educational Sustainability
Authors: Ioana Simina Frincu
Abstract:
As institutional initiatives often fail to provide good practices when it comes to heritage management, or to adapt to the changing environment in which they function and to the audiences they address, private actions represent viable strategies for sustainable knowledge acquisition. Information dissemination to future generations is one of the key aspects of preserving cultural heritage and is successfully feasible even in the absence of original artifacts. Combined with the (re)discovery of the natural landscape, open-air exploratory approaches (archeoparks), as opposed to an enclosed, monodisciplinary, rigid framework (traditional museums), are more likely to 'speak the language' of a larger number of people belonging to a variety of categories, ages, and professions. Interactive sites are efficient ways of stimulating heritage awareness and increasing the number of visitors of non-interactive/static cultural institutions that own original pieces of history, deliver specialized information, and make continuous efforts to preserve historical evidence (relics, manuscripts, etc.). It is high time entrepreneurs took over the role of promoting cultural heritage, be it in a more commercial yet more attractive form (business). Inclusive, participatory activities conceived by experts from different domains (history, anthropology, tourism, sociology, business management, integrative sustainability, etc.) have better chances of ensuring long-term cultural benefits for both adults and children, especially when and where the educational discourse fails. These unique self-experience leisure activities, which offer everyone the opportunity to recreate history themselves and to relive the ancestors' way of living, surviving, and exploring, should be regarded not as pseudo-scientific approaches but as important pre-steps to museum experiences. In order to support this theory, focus is laid on two different examples: one dynamic, held outdoors (the Boario Terme Archeopark in Italy), and one experimental, held indoors (the reconstruction of the Neolithic sanctuary of Parta, Romania, as part of a transdisciplinary academic course), together with their impact on young generations. The conclusion of this study shows that the increasingly low engagement of youth (students) in discovering and understanding history, archaeology, and heritage can be revived by entrepreneurial projects. Keywords: archeopark, educational tourism, open air museum, Parta sanctuary, prehistory
Procedia PDF Downloads 139
1965 Platelet Volume Indices: Emerging Markers of Diabetic Thrombocytopathy
Authors: Mitakshara Sharma, S. K. Nema
Abstract:
Diabetes mellitus (DM) is a metabolic disorder prevalent in pandemic proportions, incurring significant morbidity and mortality due to the associated vascular angiopathies. Platelet-related thrombogenesis plays a key role in the pathogenesis of these complications. Most patients with type II DM suffer from preventable vascular complications, and early diagnosis can help manage these successfully. These complications are attributed to platelet activation, which can be recognised by an increase in platelet volume indices (PVI), viz. mean platelet volume (MPV) and platelet distribution width (PDW). This study was undertaken with the aim of finding a relationship between PVI and the vascular complications of diabetes mellitus, their importance as a causal factor in these complications, and their use as markers for early detection of impending vascular complications in patients with poor glycaemic status. This is a cross-sectional study conducted over 2 years with a total of 930 subjects. The subjects were segregated into three groups on the basis of glycosylated haemoglobin (HbA1c): (a) diabetic, (b) non-diabetic, and (c) subjects with impaired fasting glucose (IFG), with 300 individuals each in the IFG and non-diabetic groups and 330 individuals in the diabetic group. The diabetic group was further divided into two groups: (a) diabetic subjects with diabetes-related vascular complications and (b) diabetic subjects without diabetes-related vascular complications. Samples for HbA1c and platelet indices were collected using ethylenediaminetetraacetic acid (EDTA) as anticoagulant and processed on a SYSMEX XS-800i autoanalyser. The study revealed a stepwise increase in PVI from non-diabetics to IFG to diabetics. MPV and PDW of diabetics, IFG, and non-diabetics were 17.60 ± 2.04, 11.76 ± 0.73, 9.93 ± 0.64 and 19.17 ± 1.48, 15.49 ± 0.67, 10.59 ± 0.67, respectively, with a significant p value of 0.00 and a significant positive correlation (MPV-HbA1c r = 0.951; PDW-HbA1c r = 0.875). However, a significant negative correlation was found between glycaemic levels and total platelet count (PC-HbA1c r = -0.164). MPV and PDW of subjects with and without diabetes-related complications were (15.14 ± 1.04) fl and (17.51 ± 0.39) fl, and (18.96 ± 0.83) fl and (20.09 ± 0.98) fl, respectively, with a significant p value of 0.00. The current study demonstrates raised platelet indices and reduced platelet counts in association with rising glycaemic levels and diabetes-related vascular complications across the study groups, and shows that platelet morphology is altered with increasing glycaemic levels. These changes can be detected by measurements of PVI, which are simple, cost-effective, and effortless indicators of impending vascular complications in patients with deranged glycaemic control. PVI should be researched and explored further as surrogate markers to develop a clinical tool for early recognition of vascular changes related to diabetes and thereby help prevent them. They can prove to be particularly useful in developing countries with limited resources. This is a multi-parameter, comprehensive study with an adequately powered design and represents a pioneering effort in India, as both platelet indices (MPV and PDW) and platelet count have been evaluated together for the first time in diabetics, non-diabetics, patients with IFG, and in diabetic patients with and without diabetes-related vascular complications. Keywords: diabetes, HbA1C, IFG, MPV, PDW, PVI
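As a rough illustration of the kind of group comparison and correlation analysis reported above, the sketch below applies a Kruskal-Wallis test across three glycaemic groups and a Spearman correlation between MPV and HbA1c. The data are synthetic placeholders drawn to loosely resemble the reported group means; they are not the study data.

```python
# Minimal sketch: non-parametric group comparison and correlation for
# platelet volume indices. All values below are synthetic placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical MPV values (fl) for three glycaemic groups.
mpv_non_diabetic = rng.normal(9.9, 0.6, 300)
mpv_ifg          = rng.normal(11.8, 0.7, 300)
mpv_diabetic     = rng.normal(17.6, 2.0, 330)

# Kruskal-Wallis: does MPV differ across the three groups?
h_stat, p_value = stats.kruskal(mpv_non_diabetic, mpv_ifg, mpv_diabetic)
print(f"Kruskal-Wallis H = {h_stat:.1f}, p = {p_value:.3g}")

# Spearman correlation between HbA1c and MPV in a pooled hypothetical sample.
hba1c = np.concatenate([rng.normal(5.2, 0.3, 300),
                        rng.normal(6.0, 0.3, 300),
                        rng.normal(8.5, 1.2, 330)])
mpv = np.concatenate([mpv_non_diabetic, mpv_ifg, mpv_diabetic])
rho, p_corr = stats.spearmanr(hba1c, mpv)
print(f"Spearman rho = {rho:.3f}, p = {p_corr:.3g}")
```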
Procedia PDF Downloads 239
1964 Quality Care from the Perception of the Patient in Ambulatory Cancer Services: A Qualitative Study
Authors: Herlin Vallejo, Jhon Osorio
Abstract:
Quality is a concept that has gained importance in different scenarios over time, especially in the area of health. The nursing staff is one of the actors that contributes most to the care process and to user satisfaction in quality evaluations. However, until now, there have been few tools to measure the quality of care in specialized settings. Patients receiving ambulatory cancer treatments can face various problems, which can increase their level of distress, so improving the quality of outpatient care for cancer patients should be a priority for oncology nursing. The patients' experience of care in these services has been little investigated. The purpose of this study was to understand patients' perceptions of quality care in outpatient chemotherapy services. A qualitative, exploratory, descriptive study was carried out with 9 patients older than 18 years, diagnosed with cancer, who were treated in the outpatient chemotherapy rooms of the Institute of Cancerology, had received at least three months of treatment with curative intent, and had given their informed consent. The number of participants was determined by theoretical saturation, and they were selected by convenience. Unstructured interviews were conducted, recorded, and transcribed. The information was analysed using content analysis. Three categories emerged that reflect patients' perceptions of quality care: patient-centered care, care with love, and the effects of care. Patients highlighted situations showing that care is centered on them, incorporating institutional and infrastructure elements of patient-centered care and the qualities of care, as well as what, in contrast, inappropriate care means for them. Care with love, as a perception of quality care, means for patients that the nursing staff must have certain qualities; patients perceive caring with love as a family affair and describe limits on care with love and on the nurse-patient relationship. Quality care has effects on both the patient and the nursing staff. One of the most relevant effects was the confidence that the patient develops towards the nurse, besides transforming unreal images about cancer treatment with chemotherapy. On the other hand, quality care generates a commitment to self-care and facilitates the transit through oncological disease and chemotherapeutic treatment, perceived as a healing transit. It is concluded that quality care, from the perception of patients, is a construction that goes beyond structural issues and is related to an institutional culture of quality reflected in the attitude of the nursing staff and in acts of care that have positive effects on the experience of chemotherapy and disease. These results contribute to a better understanding of how quality care is built from the perception of patients and open a range of possibilities for the future development of an individualized instrument to evaluate the quality of care from the perception of patients with cancer. Keywords: nursing care, oncology service hospital, quality management, qualitative studies
Procedia PDF Downloads 137
1963 Perceived Restorativeness Scale-6: A Short Version of the Perceived Restorativeness Scale for Mixed (or Mobile) Devices
Authors: Sara Gallo, Margherita Pasini, Margherita Brondino, Daniela Raccanello, Roberto Burro, Elisa Menardo
Abstract:
Most studies on the ability of environments to restore people's cognitive resources have been conducted in the laboratory using simulated environments (e.g., photographs, videos, or virtual reality), based on the implicit assumption that exposure to simulated environments has the same effects as exposure to real environments. However, the technical characteristics of simulated environments, such as the dynamic or static nature of the stimulus, critically affect their perception. Measuring perceived restorativeness in situ rather than in the laboratory could increase the validity of the obtained measurements. Personal mobile devices could be useful because they allow immediate access to online surveys when people are directly exposed to an environment. At the same time, it becomes important to develop short and reliable measuring instruments that allow a quick assessment of the restorative qualities of environments. One of the frequently used self-report measures of perceived restorativeness is the Perceived Restorativeness Scale (PRS), based on Attention Restoration Theory. Many different versions have been proposed and used according to different research purposes and needs, without studying their validity. This longitudinal study reports some preliminary validation analyses of a short version of the original scale, the PRS-6, developed to be quick and mobile-friendly. It is composed of 6 items assessing fascination and being-away. 102 Italian university students participated in the study, 84% female, with ages ranging from 18 to 47 (M = 20.7; SD = 2.9). Data were obtained through an online survey that asked them to report, once a day for seven days, the perceived restorativeness of the environment they were in (and the kind of environment) and their positive emotions (Positive and Negative Affect Schedule, PANAS). Cronbach's alpha and item-total correlations were used to assess reliability and internal consistency. Confirmatory Factor Analysis (CFA) models were run to study the factorial structure (construct validity). Correlation analyses between PRS and PANAS scores were used to check discriminant validity. Finally, multigroup CFA models were used to study measurement invariance (configural, metric, scalar, strict) between different mobile devices and between days of assessment. On the whole, the PRS-6 showed good psychometric properties, similar to those of the original scale, and invariance across devices and days. These results suggest that the PRS-6 could be a valid alternative for assessing perceived restorativeness when researchers need a brief and immediate evaluation of the restorative quality of an environment. Keywords: restorativeness, validation, short scale development, psychometric properties
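To make the reliability analysis concrete, here is a minimal sketch of Cronbach's alpha and corrected item-total correlations for a 6-item scale, computed directly with pandas/numpy. The response data are randomly generated placeholders, not the PRS-6 dataset.

```python
# Minimal sketch: Cronbach's alpha and corrected item-total correlations
# for a 6-item scale. Responses below are random placeholders.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
items = pd.DataFrame(rng.integers(1, 8, size=(102, 6)),   # 7-point Likert-style answers
                     columns=[f"item{i}" for i in range(1, 7)])

def cronbach_alpha(df: pd.DataFrame) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = df.shape[1]
    item_vars = df.var(axis=0, ddof=1)
    total_var = df.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

print(f"Cronbach's alpha = {cronbach_alpha(items):.3f}")

# Corrected item-total correlation: each item vs. the sum of the other items.
for col in items.columns:
    rest = items.drop(columns=col).sum(axis=1)
    print(f"{col}: corrected item-total r = {items[col].corr(rest):.3f}")
```

A full validation would add the CFA and measurement invariance models described in the abstract; the snippet only covers the internal-consistency step.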
Procedia PDF Downloads 253
1962 An Investigation into Enablers and Barriers of Reverse Technology Transfer
Authors: Nirmal Kundu, Chandan Bhar, Visveswaran Pandurangan
Abstract:
Technology is the most valued possession of a country or an organization. Economic development depends not on the stock of technology but on the capability to exploit it. Technology transfer is the main way in which developing countries gain access to state-of-the-art technology. Traditional technology transfer is a unidirectional phenomenon in which technology is transferred from developed to developing countries. But now the wind is changing. There is general agreement that a global shift of economic power from west to east is under way. As China and India make the transition from users to producers, and from producers to innovators, this has increasingly important implications for the economy, technology, and policy of global trade. As a result, reverse technology transfer has become a phenomenon and a field of study in technology management. The term "reverse technology transfer" is not well defined. Initially, the concept was associated with the phenomenon of "brain drain" from developing to developed countries. In a second phase, it was associated with the transfer of knowledge and technology from subsidiaries to multinationals. Finally, the time has come to extend the concept to two different organizations or countries, related or unrelated by traditional technology transfer, where the transferor has essentially received the technology through the traditional mode of technology transfer. The objectives of this paper are to study: 1) the present status of reverse technology transfer, 2) the factors that act as enablers and barriers of reverse technology transfer, and 3) how a reverse technology transfer strategy can be integrated into the technology policy of a country to give it an economic boost. The research methodology used in this study is a combination of literature review, case studies, and key informant interviews. The literature review includes both published and unpublished sources. In the case studies, an attempt has been made to examine records of reverse technology transfer that have occurred in developing countries. For the key informant interviews, informal telephone discussions were carried out with key executives of organizations (industry, universities, and research institutions) who are actively engaged in the process of technology transfer, traditional as well as reverse. Reverse technology transfer is possible only by creating technological capabilities. The following four enablers, coupled with active and aggressive government action, can help build the technology base needed to reach the goal of reverse technology transfer: 1) imitation to innovation, 2) reverse engineering, 3) a collaborative R&D approach, and 4) preventing reverse brain drain. The barriers that stand in the way are a mindset of over-dependence, over-subordination, and a parent-child attitude (rather than an adult attitude). By exploiting these enablers and overcoming the barriers to reverse technology transfer, developing countries like India and China can prove that going "reverse" is the best way to move forward and to establish themselves again as leaders of the future world. Keywords: barriers of reverse technology transfer, enablers of reverse technology transfer, knowledge transfer, reverse technology transfer, technology transfer
Procedia PDF Downloads 399
1961 Colorful Ethnoreligious Map of Iraq and the Current Situation of Minorities in the Country
Authors: Meszár Tárik
Abstract:
The aim of this study is to introduce the minority groups living in Iraq and to shed light on their current situation. The Middle East is a rather heterogeneous region in ethnic terms; it includes many ethnic, national, religious, linguistic, and ethnoreligious groups. The relationship between majority and minority is a main cause of various conflicts in the region. It seems that most of the post-Ottoman states have not yet developed a unified national identity capable of integrating their multi-ethnic societies. The issue of minorities living in the Middle East is highly politicized and controversial, as the various Arab states consider the treatment of minorities to be their internal affair, do not recognize discrimination, or even deny the existence of any kind of minority on their territory. This attitude of the Middle Eastern states may also be due to the fact that the minority issue can be abused and can serve as a reference point for the intervention policies of Western countries at any time. Methodologically, the challenges of these groups are examined through the manifestos of prominent individuals and organizations belonging to minorities. The basic aim is to present the minorities' own account of the issue. The study also introduces the different ethnic and religious minorities in Iraq and analyzes their situation during the operation of the terrorist organization "Islamic State" and in its aftermath. It is clear that the situation of these communities deteriorated significantly with the advance of ISIS, but it is also clear that even after the expulsion of the militant group, we cannot necessarily report an improvement in this area, especially in terms of the minorities' ability to assert their interests and their physical security. The emergence of armed militias involved in the expulsion of ISIS sometimes has extremely negative effects on them. Until the interests of non-Muslims are adequately represented at the local level and in the legislature, most experts and advocates believe that little will change in their situation. When conflicts flare up, many Iraqi citizens leave Iraq, but because of the poor public security situation (threats from terrorist organizations, interventions by other countries), emigration causes serious problems not only outside the country's borders but also within the country. Another ominous implication for minorities is that their communities are very slow, if they return at all, to go back to their homes after fleeing their own settlements. An important finding of the study is that this phenomenon is changing the face of traditional Iraqi settlements and threatens to plunge groups that have lived there for thousands of years into the abyss of history. Therefore, we not only present the current situation of minorities living in Iraq but also discuss their future possibilities. Keywords: Middle East, Iraq, Islamic State, minorities
Procedia PDF Downloads 87
1960 Protection of Victims' Rights in International Criminal Proceedings
Authors: Irina Belozerova
Abstract:
In recent years, the number of crimes against peace and humanity has been constantly increasing. The development of the international community is inseparably connected to compliance with the law that protects the rights and interests of citizens in all of their manifestations. The provisions of the law of criminal procedure are no exception. The rights of the victims of genocide, war crimes, and crimes against humanity require particular attention. These crimes fall within the jurisdiction of the International Criminal Court, governed by the Rome Statute of the International Criminal Court. These crimes have the following features. First, any such crime has a mass character and therefore requires specific regulation in international criminal law and procedure and in the national criminal law and procedure of different countries. Second, the victims of such crimes are usually children, women, and old people; entire national, ethnic, racial, or religious groups are destroyed. These features influence the classification of victims by the age criterion. Article 68 of the Rome Statute provides for the protection of the safety, physical and psychological well-being, dignity, and privacy of victims and witnesses and thus determines the procedural status of these persons. However, not all persons whose rights have been violated by the commission of these crimes acquire the status of victims. This is due to the fact that such crimes affect a huge number of persons, and it is impossible to mention them all by name. It is also difficult to assess the entire damage suffered by the victims. In assessing the amount of damages, it is essential to take into account physical and moral harm as well as property damage. The procedural status of victims thus gains an exclusive character. In order to determine the full extent of the damage suffered by the victims, it is necessary to collect sufficient evidence. However, it is extremely difficult to collect evidence that would ensure the full and objective protection of victims' rights. While making requests for the collection of evidence, the International Criminal Court faces the problem of the protection of national security information. The religious beliefs and family life of victims are also of great importance. In some Islamic countries, it is impossible to question a woman without her husband's consent, which affects the objectivity of her testimony. Finally, the number of victims runs into the hundreds and thousands. The assessment of these elements demands time and highly qualified work. These factors justify the creation of a mechanism that would help to collect the evidence and establish the truth in international criminal proceedings. This mechanism would help to impose a just and appropriate punishment on persons accused of having committed a crime, since, in committing the crime, the perpetrators could not have misunderstood the consequences of their criminal intent. Keywords: crimes against humanity, evidence in international criminal proceedings, international criminal proceedings, protection of victims
Procedia PDF Downloads 249
1959 Robust Electrical Segmentation for Zone Coherency Delimitation Base on Multiplex Graph Community Detection
Authors: Noureddine Henka, Sami Tazi, Mohamad Assaad
Abstract:
The electrical grid is a highly intricate system designed to transfer electricity from production areas to consumption areas. The Transmission System Operator (TSO) is responsible for ensuring the efficient distribution of electricity and maintaining the grid's safety and quality. However, due to the increasing integration of intermittent renewable energy sources, there is a growing level of uncertainty, which requires a faster, more responsive approach. A potential solution is electrical segmentation, i.e., the creation of coherence zones in which electrical disturbances mainly remain confined within the zone. Indeed, by means of coherent electrical zones, it becomes possible to focus solely on a sub-zone, reducing the range of possibilities and aiding in managing uncertainty. It allows faster execution of operational processes and easier learning for supervised machine learning algorithms. Electrical segmentation can be applied to various applications, such as electrical control, minimizing electrical losses, and ensuring voltage stability. Since the electrical grid can be modeled as a graph, where the vertices represent electrical buses and the edges represent electrical lines, identifying coherent electrical zones can be seen as a clustering task on graphs, generally called community detection. Nevertheless, a critical criterion for the zones is their ability to remain resilient to the electrical evolution of the grid over time. This evolution is due to constant changes in electricity generation and consumption, which are reflected in graph structure variations as well as line flow changes. One approach to creating a resilient segmentation is to design zones that are robust under various circumstances. This issue can be represented through a multiplex graph, where each layer represents a specific situation that may arise on the grid. Consequently, resilient segmentation can be achieved by conducting community detection on this multiplex graph. The multiplex graph is composed of multiple graphs, and all layers share the same set of vertices. Our proposal involves a model that uses a unified representation to compute a flattening of all layers. This unified representation can be penalized to obtain K connected components representing the robust electrical segmentation clusters. We compare our robust segmentation to a segmentation based on a single reference situation. The robust segmentation proves its relevance by producing clusters with high intra-cluster electrical perturbation and low variance of electrical perturbation. The experiments show when, and in which contexts, robust electrical segmentation is beneficial. Keywords: community detection, electrical segmentation, multiplex graph, power grid
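As a rough sketch of the flatten-then-cluster idea described above, the example below averages the weighted adjacency of several layers of a multiplex graph (all sharing the same buses) and runs a standard modularity-based community detection on the flattened graph. The toy grid data and the choice of greedy modularity are illustrative assumptions, not the authors' method.

```python
# Minimal sketch: flatten a multiplex graph (shared vertex set, several
# operating situations as layers) and detect communities on the result.
# The toy grid and the greedy modularity algorithm are illustrative only.
import networkx as nx

buses = range(6)

# Each layer is a weighted graph over the same buses; weights could encode,
# e.g., electrical coupling in one operating situation (hypothetical values).
layer_edges = [
    [(0, 1, 1.0), (1, 2, 0.9), (3, 4, 1.0), (4, 5, 0.8), (2, 3, 0.1)],
    [(0, 1, 0.8), (1, 2, 1.0), (3, 4, 0.9), (4, 5, 1.0), (2, 3, 0.2)],
    [(0, 2, 0.7), (0, 1, 0.9), (3, 5, 0.8), (4, 5, 0.9), (2, 3, 0.1)],
]

# Flatten: average edge weights across layers into a single graph.
flat = nx.Graph()
flat.add_nodes_from(buses)
for edges in layer_edges:
    for u, v, w in edges:
        if flat.has_edge(u, v):
            flat[u][v]["weight"] += w / len(layer_edges)
        else:
            flat.add_edge(u, v, weight=w / len(layer_edges))

# Community detection on the flattened graph (one possible choice of algorithm).
communities = nx.algorithms.community.greedy_modularity_communities(flat, weight="weight")
print([sorted(c) for c in communities])   # e.g., [[0, 1, 2], [3, 4, 5]]
```

The paper's penalized unified representation with K connected components is a more constrained formulation than plain modularity maximization; the sketch only conveys the flattening step and the general clustering workflow.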
Procedia PDF Downloads 79
1958 Integrated Passive Cooling Systems for Tropical Residential Buildings: A Review through the Lens of Latent Heat Assessment
Authors: O. Eso, M. Mohammadi, J. Darkwa, J. Calautit
Abstract:
Residential buildings are responsible for 22% of global end-use energy demand and 17% of global CO₂ emissions. Tropical climates in particular present higher latent heat gains, leading to higher cooling loads. However, the cooling processes are mostly based on conventional mechanical air conditioning systems, which are energy- and carbon-intensive technologies. Passive cooling systems have in the past been considered as alternative technologies for minimizing energy consumption in buildings. Nevertheless, replacing mechanical cooling systems with passive ones requires a careful assessment of the passive cooling system's heat transfer to determine whether it can outperform its conventional counterpart. This is because internal heat gains, indoor-outdoor heat transfer, and heat transfer through the envelope affect the performance of passive cooling systems. While many studies have investigated sensible heat transfer in passive cooling systems, few have focused on their latent heat transfer capabilities. Furthermore, combining heat prevention, heat modulation, and heat dissipation to passively cool indoor spaces in tropical climates is critical to achieving thermal comfort. Since passive cooling systems use only one of these three approaches at a time, the integration of more than one passive cooling system for effective indoor latent heat removal while still saving energy is studied. This study is a systematic review of recently published peer-reviewed journal articles on integrated passive cooling systems for tropical residential buildings. The missing links in the experimental and numerical studies with regard to latent heat reduction interventions are presented. Energy simulation studies of integrated passive cooling systems in tropical residential buildings are also discussed. The review has shown that a comfortable indoor environment is attainable when two or more passive cooling systems are integrated in tropical residential buildings. Improvements occur in the heat transfer rate and cooling performance of passive cooling systems when thermal energy storage systems such as phase change materials are included. Integrating passive cooling systems in tropical residential buildings can reduce energy consumption by 6-87% while achieving up to a 17.55% reduction in indoor heat flux. The review has highlighted a lack of numerical studies on passive cooling system performance in tropical savannah climates. In addition, detailed studies are required to establish suitable latent heat transfer rates in passive cooling ventilation devices under this climate category. This should be considered in subsequent studies. The conclusions and outcomes of this study will help researchers understand the overall energy performance of integrated passive cooling systems in tropical climates and help them identify and design suitable climate-specific options for residential buildings. Keywords: energy savings, latent heat, passive cooling systems, residential buildings, tropical residential buildings
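Because the review centres on latent heat, the short sketch below shows the standard psychrometric estimate of a ventilation latent load, Q_lat = m_dot_air · h_fg · Δw, which is the kind of quantity a latent heat assessment of a passive ventilation device would need. The airflow and humidity-ratio figures are assumed example values, not data from the reviewed studies.

```python
# Minimal sketch: latent cooling load carried by a ventilation air stream,
# Q_lat = m_dot_air * h_fg * (w_outdoor - w_indoor).
# All numerical inputs are assumed example values.

RHO_AIR = 1.2        # kg/m^3, approximate density of moist air
H_FG    = 2_450_000  # J/kg, approximate latent heat of vaporisation of water

def latent_load_w(volume_flow_m3_s: float, w_out: float, w_in: float) -> float:
    """Latent load in watts for a given volumetric airflow and humidity ratios
    (kg water vapour per kg dry air)."""
    m_dot = RHO_AIR * volume_flow_m3_s          # mass flow of air, kg/s
    return m_dot * H_FG * (w_out - w_in)        # positive = moisture gain to remove

# Example: 0.05 m^3/s of outdoor air at w = 0.019 entering a space kept at w = 0.010.
q_lat = latent_load_w(0.05, w_out=0.019, w_in=0.010)
print(f"Latent load ≈ {q_lat:.0f} W")   # ≈ 1323 W for these assumed values
```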
Procedia PDF Downloads 149
1957 Monte Carlo Simulation Study on Improving the Flatting Filter-Free Radiotherapy Beam Quality Using Filters from Low-Z Material
Authors: H. M. Alfrihidi, H.A. Albarakaty
Abstract:
Flattening filter-free (FFF) photon beam radiotherapy has increased in the last decade, enabled by advancements in treatment planning systems and radiation delivery techniques such as multi-leaf collimators. FFF beams have higher dose rates, which reduces treatment time. On the other hand, FFF beams have a higher surface dose, due to the loss of the beam hardening effect caused by the presence of the flattening filter (FF). The possibility of improving FFF beam quality using filters made of low-Z materials such as steel and aluminium (Al) was investigated using Monte Carlo (MC) simulations. The attenuation coefficient of low-Z materials is higher for low-energy photons than for high-energy photons, which leads to hardening of the FFF beam and, consequently, a reduction in the surface dose. The BEAMnrc user code, based on the Electron Gamma Shower (EGSnrc) MC code, was used to simulate the beam of a 6 MV TrueBeam linac. A phase-space file provided by Varian Medical Systems was used as the radiation source in the simulation. This phase-space file was scored just above the jaws, at 27.88 cm from the target. The linac from the jaws downward was constructed, and the passing radiation was simulated and scored at 100 cm from the target. To study the effect of low-Z filters, steel and Al filters with a thickness of 1 cm were added below the jaws, and the phase-space file was scored at 100 cm from the target. For comparison, the FF beam was simulated using a similar setup. The BEAM Data Processor (BEAMdp) was used to analyse the energy spectrum in the phase-space files. Then, the dose distribution resulting from these beams was simulated in a homogeneous water phantom using DOSXYZnrc. The dose profile was evaluated according to the surface dose, the lateral dose distribution, and the percentage depth dose (PDD). The energy spectra of the beams show that the FFF beam is softer than the FF beam. The energy peaks for the FFF and FF beams are 0.525 MeV and 1.52 MeV, respectively. The FFF beam's energy peak becomes 1.1 MeV with a steel filter, while the Al filter does not affect the peak position. The steel and Al filters reduced the surface dose by 5% and 1.7%, respectively. The dose at a depth of 10 cm (D10) rises by around 2% and 0.5% when using the steel and Al filters, respectively. On the other hand, the steel and Al filters reduce the dose rate of the FFF beam by 34% and 14%, respectively. However, their effect on the dose rate is less than that of the tungsten FF, which reduces the dose rate by about 60%. In conclusion, filters made of low-Z material decrease the surface dose and increase the D10 dose, allowing high-dose delivery to deep tumors with a low skin dose. Although using these filters affects the dose rate, this effect is much lower than the effect of the FF. Keywords: flattening filter free, monte carlo, radiotherapy, surface dose
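To illustrate the beam-hardening mechanism exploited above (low-Z filters attenuating low-energy photons more strongly), the sketch below passes a toy polyenergetic spectrum through 1 cm of an absorber and reports the shift of the mean energy. The spectrum shape and attenuation coefficients are illustrative placeholders, not BEAMnrc results or tabulated NIST data.

```python
# Minimal sketch: beam hardening of a polyenergetic photon spectrum by a
# 1 cm absorber. Spectrum and attenuation coefficients are illustrative only.
import numpy as np

energies_mev = np.array([0.25, 0.5, 1.0, 2.0, 4.0, 6.0])
fluence      = np.array([0.30, 0.30, 0.20, 0.12, 0.06, 0.02])   # toy FFF-like spectrum

# Placeholder linear attenuation coefficients (1/cm) for a steel-like filter,
# chosen only to decrease with energy, as real coefficients broadly do.
mu_per_cm = np.array([2.0, 1.3, 0.55, 0.35, 0.27, 0.24])
thickness_cm = 1.0

# Exponential attenuation of each energy bin through the filter.
transmitted = fluence * np.exp(-mu_per_cm * thickness_cm)

mean_before = np.average(energies_mev, weights=fluence)
mean_after  = np.average(energies_mev, weights=transmitted)
print(f"Mean energy before filter: {mean_before:.2f} MeV")
print(f"Mean energy after filter:  {mean_after:.2f} MeV (hardened spectrum)")
```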
Procedia PDF Downloads 73