Search results for: change frequency
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 10620

2010 Molecular Detection of mRNA bcr-abl and Circulating Leukemic Stem Cells CD34+ in Patients with Acute Lymphoblastic Leukemia and Chronic Myeloid Leukemia and Its Association with Clinical Parameters

Authors: B. Gonzalez-Yebra, H. Barajas, P. Palomares, M. Hernandez, O. Torres, M. Ayala, A. L. González, G. Vazquez-Ortiz, M. L. Guzman

Abstract:

Leukemia arises from molecular alterations of the normal hematopoietic stem cell (HSC), transforming it into a leukemic stem cell (LSC) with high cell proliferation, self-renewal, and altered differentiation. Chronic myeloid leukemia (CML) originates from an LSC, leading to elevated proliferation of myeloid cells, and acute lymphoblastic leukemia (ALL) originates from an LSC, leading to elevated proliferation of lymphoid cells. In both cases, LSCs can be identified by multicolor flow cytometry using several antibodies. However, to date, LSC levels in peripheral blood (PB) are not well established in ALL and CML patients. On the other hand, detection of minimal residual disease (MRD) in leukemia is mainly based on identification of mRNA of the bcr-abl gene in CML patients and of other genes in ALL patients; there is no single suitable biomarker for detecting MRD in both types of leukemia. The objective of this study was to determine mRNA bcr-abl and the percentage of LSC in the peripheral blood of patients with CML and ALL, and to identify a possible association between the amount of LSC in PB and clinical data. We included 19 patients with leukemia in this study. A PB sample was collected from each patient, and leukocytes were obtained by Ficoll gradient. Immunophenotyping for LSC CD34+ was done by flow cytometry with CD33, CD2, CD14, CD16, CD64, HLA-DR, CD13, CD15, CD19, CD10, CD20, CD34, CD38, CD71, CD90, CD117, and CD123 monoclonal antibodies. In addition, to identify the presence of mRNA bcr-abl by RT-PCR, RNA was isolated using TRIzol reagent. Molecular results (presence of mRNA bcr-abl and LSC CD34+) and clinical results were analyzed with descriptive statistics, and a multiple regression analysis was performed to determine statistically significant associations. In total, 19 patients (8 with ALL and 11 with CML) were analyzed: 9 patients with de novo leukemia (ALL = 6 and CML = 3) and 10 under treatment (ALL = 5 and CML = 5).
The overall frequency of mRNA bcr-abl was 31% (6/19); it was negative in ALL patients and positive in 80% of CML patients. On the other hand, LSC were detected in 16/19 leukemia patients (%LSC = 0.02-17.3). De novo patients had a higher percentage of LSC (0.26 to 17.3%) than patients under treatment (0 to 5.93%). The clinical variables significantly associated with the amount of LSC were absence of treatment, absence of splenomegaly, and a lower leukocyte count; negative associations were found for age, sex, blasts, and mRNA bcr-abl. In conclusion, patients with de novo leukemia had a higher percentage of circulating LSC than patients under treatment, and this was associated with clinical parameters such as lack of treatment, absence of splenomegaly, and a lower leukocyte count. mRNA bcr-abl detection was only possible in the series of patients with CML, whereas circulating LSC could be identified in the peripheral blood of all leukemia patients; we therefore believe that identification of circulating LSC may be used as a biomarker for the detection of MRD in leukemia patients.
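The multiple-regression step described above can be sketched as follows. This is a minimal illustration only: the predictor names mirror the clinical variables in the abstract, but all values are invented, not the study's data.

```python
import numpy as np

# Hypothetical stand-in for the study's data: regress %LSC on clinical predictors.
rng = np.random.default_rng(0)
n = 19
X = np.column_stack([
    np.ones(n),                  # intercept
    rng.integers(0, 2, n),       # treatment status (0 = de novo, 1 = under treatment)
    rng.integers(0, 2, n),       # splenomegaly (0 = absent, 1 = present)
    rng.normal(8.0, 3.0, n),     # leukocyte count (invented scale)
])
y = rng.uniform(0.02, 17.3, n)   # %LSC in peripheral blood (invented)

# Ordinary least-squares fit; lstsq is numerically safer than the normal equations.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
y_hat = X @ beta
r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
print(beta.shape, round(r2, 3))
```

With real data, the sign of each coefficient in `beta` would indicate the direction of association, matching the abstract's report of negative associations for some variables.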

Keywords: stem cells, leukemia, biomarkers, flow cytometry

Procedia PDF Downloads 355
2009 The Influence of Bentonite on the Rheology of Geothermal Grouts

Authors: A. N. Ghafar, O. A. Chaudhari, W. Oettel, P. Fontana

Abstract:

This study is part of the EU project GEOCOND (Advanced materials and processes to improve performance and cost-efficiency of shallow geothermal systems and underground thermal storage). In heat exchange boreholes, to improve the heat transfer between the pipes and the surrounding ground, the space between the pipes and the borehole wall is normally filled with geothermal grout. Traditionally, bentonite has been a crucial component of most commercially available geothermal grouts, ensuring the required stability and impermeability. Investigations conducted in the early stage of this project, during benchmarking tests on some commercial grouts, showed considerable sensitivity of the rheological properties of the tested grouts to the mixing parameters, i.e., mixing time and velocity. Further studies showed that bentonite, one of the important constituents in most grout mixes, was probably responsible for this behavior. Apparently, a proper amount of shear must be applied during the mixing process to sufficiently activate the bentonite: the higher the applied shear, the greater the activation of the bentonite, and hence the change in the grout rheology. This explains why, occasionally in field applications, the flow properties of commercially available geothermal grouts mixed under different conditions (mixer type, mixing time, mixing velocity) are completely different from those expected. A series of tests was conducted on grout mixes, with and without bentonite, using different mixing protocols. The aim was to eliminate or reduce the sensitivity of the rheological properties of the geothermal grouts to the mixing parameters by replacing bentonite with polymeric (non-clay) stabilizers. The results showed that replacing bentonite with a proper polymeric stabilizer diminished, to a great extent, the sensitivity of the grout mix to mixing time and velocity.
This offers developers and producers of geothermal grouts an alternative route to enhanced materials with less uncertainty in the results obtained in field applications.

Keywords: flow properties, geothermal grout, mixing time, mixing velocity, rheological properties

Procedia PDF Downloads 123
2008 Getting to Know the Enemy: Utilization of Phone Record Analysis Simulations to Uncover a Target’s Personal Life Attributes

Authors: David S. Byrne

Abstract:

The purpose of this paper is to understand how phone record analysis can enable identification of subjects in communication with the target of a terrorist plot. This study also sought to understand the advantages of implementing simulations to develop the skills of future intelligence analysts and thereby enhance national security. Through the examination of phone reports, which in essence consist of the call traffic of incoming and outgoing numbers (not listening to calls or reading the content of text messages), patterns can be uncovered that point toward members of a criminal group and the activities planned. Through temporal and frequency analysis, conclusions were drawn to offer insights into the identity of participants and the potential scheme being undertaken. The challenge lies in the accurate identification of the users of the phones in contact with the target. Investigators often rely on proprietary databases and open sources to accomplish this task; however, it is difficult to ascertain the accuracy of the information found. This paper therefore poses two research questions: first, how effective are freely available web sources of information at determining the actual identity of callers? Second, does the identity of the callers enable an understanding of the lifestyle and habits of the target? The methodology for this research consisted of the analysis of the call detail records of the author’s personal phone activity spanning the period of a year, combined with the hypothetical premise that the owner of said phone was the leader of a terrorist cell. The goal was to reveal the identity of his accomplices and to understand how his personal attributes could further paint a picture of the target’s intentions. The results of the study were interesting: nearly 80% of the calls were identified, with over a 75% accuracy rating, via data mining of open sources.
The suspected terrorist’s inner circle was recognized, including relatives and potential collaborators, as well as financial institutions [money laundering], restaurants [meetings], a sporting goods store [purchase of supplies], and airlines and hotels [travel itinerary]. The outcome of this research showed the benefits of cellphone analysis without more intrusive and time-consuming methodologies, though it may also be instrumental for potential surveillance, interviews, and developing probable cause for wiretaps. Furthermore, this research highlights the importance of building the skills of future intelligence analysts through phone record analysis via simulations; hands-on learning in this case study emphasizes the development of the competencies necessary to improve investigations overall.
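The frequency and temporal analysis of call detail records described above can be sketched in a few lines of standard-library Python. The record format and field names below are assumptions for illustration, not the paper's actual data layout:

```python
from collections import Counter
from datetime import datetime

# Hypothetical call detail records: (other_party, direction, timestamp).
cdr = [
    ("555-0101", "out", "2023-01-05 08:15"),
    ("555-0101", "in",  "2023-01-05 21:40"),
    ("555-0102", "out", "2023-01-06 21:05"),
    ("555-0101", "out", "2023-01-07 22:10"),
    ("555-0103", "in",  "2023-01-08 09:30"),
]

# Frequency analysis: whom does the target contact most often?
contact_freq = Counter(number for number, _, _ in cdr)

# Temporal analysis: at what hours of the day do calls cluster?
hour_freq = Counter(datetime.strptime(ts, "%Y-%m-%d %H:%M").hour
                    for _, _, ts in cdr)

print(contact_freq.most_common(1))  # -> [('555-0101', 3)]
print(sorted(hour_freq.items()))
```

The most frequent contacts become candidates for open-source identification, and the hourly clustering hints at routine (e.g., evening calls to the same number suggesting a close associate).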

Keywords: hands-on learning, intelligence analysis, intelligence education, phone record analysis, simulations

Procedia PDF Downloads 14
2007 Economic Decision Making under Cognitive Load: The Role of Numeracy and Financial Literacy

Authors: Vânia Costa, Nuno De Sá Teixeira, Ana C. Santos, Eduardo Santos

Abstract:

Financial literacy and numeracy have been regarded as paramount for rational household decision making amid the increasing complexity of financial markets. However, financial decisions are often made under sub-optimal circumstances, including cognitive overload. The present study aims to clarify how financial literacy and numeracy, taken as relevant expert knowledge for financial decision making, modulate possible effects of cognitive load. Participants were required to choose between a sure loss and a gamble pertaining to a financial investment, either with or without a competing memory task. Two experiments were conducted, varying only the content of the competing task. In the first, the financial choice task was performed while maintaining a list of five random letters in working memory. In the second, cognitive load was based upon the retention of six random digits. In both experiments, one of the items in the list had to be recalled given its serial position. Outcomes of the first experiment revealed no significant main effect or interactions involving the cognitive load manipulation and numeracy or financial literacy skills, strongly suggesting that retaining a list of random letters did not interfere with the cognitive abilities required for financial decision making. Conversely, in the second experiment, a significant interaction between the competing memory task and level of financial literacy (but not numeracy) was found for the frequency of choice of the gambling option. Overall, in the control condition, both participants with high financial literacy and those with high numeracy were more prone to choose the gambling option. However, when under cognitive load, participants with high financial literacy were as likely as their less literate counterparts to choose the gambling option.
This outcome is interpreted as evidence that financial literacy prevents intuitive risk-averse reasoning only under highly favourable conditions, as is the case when no other task is competing for cognitive resources. In contrast, participants with higher levels of numeracy were consistently more prone to choose the gambling option in both experimental conditions. These results are discussed in light of the opposition between classical dual-process theories and fuzzy-trace theories of intuitive decision making, suggesting that while some instances of expertise (such as numeracy) readily support easily accessible gist representations, other expert skills (such as financial literacy) depend upon deliberative processes. It is furthermore suggested that this dissociation between types of expert knowledge might depend on the degree to which they are generalizable across disparate settings. Finally, applied implications of the present study are discussed, with a focus on how it informs financial regulators and on the importance and limits of promoting financial literacy and general numeracy.

Keywords: decision making, cognitive load, financial literacy, numeracy

Procedia PDF Downloads 181
2006 Hazardous Effects of Metal Ions on the Thermal Stability of Hydroxylammonium Nitrate

Authors: Shweta Hoyani, Charlie Oommen

Abstract:

HAN-based liquid propellants are perceived as a potential substitute for hydrazine in space propulsion. Storage stability over a long service life in orbit is one of the key concerns for HAN-based monopropellants because of their reactivity with metallic and non-metallic impurities, which could be entrained from the surfaces of fuel tanks and tubing. The end result of this reactivity directly affects the handling, performance, and storability of the liquid propellant. Gaseous products resulting from decomposition of the propellant can lead to deleterious pressure build-up in storage vessels, and the partial loss of an energetic component can change the ignition and combustion behavior and alter the performance of the thruster. In this context, the effect of the most plausible metal contaminants, namely iron, copper, chromium, nickel, manganese, molybdenum, zinc, titanium, and cadmium, on the thermal decomposition mechanism of HAN has been investigated. Studies involving different concentrations of metal ions and HAN at different preheat temperatures have been carried out. The effect of metal ions on the decomposition behavior of HAN has been studied earlier in the context of HAN's use as a gun propellant; the current investigation, however, pertains to the decomposition mechanism of HAN in the context of its use as a monopropellant for space propulsion. Decomposition onset temperature, rate of weight loss, and heat of reaction were studied using DTA-TGA, and the total pressure rise and rate of pressure rise during decomposition were evaluated using an in-house built constant-volume batch reactor. In addition, the reaction mechanism and product profile were studied using a TGA-FTIR setup. Iron and copper displayed the greatest reactivity: initial results indicate that iron and copper show a sensitizing effect at concentrations as low as 50 ppm with 60% HAN solution at 80°C, whereas 50 ppm zinc displays no effect on the thermal decomposition of even 90% HAN solution at 80°C.

Keywords: hydroxylammonium nitrate, monopropellant, reaction mechanism, thermal stability

Procedia PDF Downloads 421
2005 Induction of Callus and Expression of Compounds in Capsicum Frutescens Supplemented with 2,4-D

Authors: Jamilah Syafawati Yaacob, Muhammad Aiman Ramli

Abstract:

Cili padi, or Capsicum frutescens, is one of the Capsicum species of the nightshade family, Solanaceae. It is popular in Malaysia, is widely used as a food ingredient, and also possesses vast medicinal properties. The objectives of this study were to determine the optimum 2,4-D hormone concentration for callus induction from stem explants of C. frutescens and the effects of different 2,4-D concentrations on the expression of compounds in C. frutescens. Seeds were cultured on MS media without hormones (MS basal media) to yield aseptic seedlings of this species, which were then used as the explant source for subsequent tissue culture experiments. Stem explants were excised from the aseptic seedlings and cultured on MS media supplemented with various concentrations (0.1, 0.3, and 0.5 mg/L) of 2,4-D to induce the formation of callus. Fresh weight, dry weight, and callus growth percentage were recorded for all samples. The highest mean dry weight was observed on MS media supplemented with 0.5 mg/L 2,4-D, where 0.4499 ± 0.106 g of callus was produced. The highest percentage of callus growth (16.4%) was also observed in cultures supplemented with 0.5 mg/L 2,4-D. The callus samples were also subjected to HPLC-MS to evaluate the effect of hormone concentration on the expression of bioactive compounds in the different samples. Results showed that caffeoylferuloylquinic acids were present in all samples but were most abundant in callus cells supplemented with 0.3 and 0.5 mg/L 2,4-D. Interestingly, an unknown compound was observed to be highly expressed in callus cells supplemented with 0.1 mg/L 2,4-D, while its presence was less significant in callus cells supplemented with 0.3 and 0.5 mg/L 2,4-D. Furthermore, a compound identified as octadecadienoic acid was uniquely expressed in callus supplemented with 0.5 mg/L 2,4-D but absent in callus cells supplemented with 0.1 and 0.3 mg/L 2,4-D.
The results obtained in this study indicate that plant growth regulators play a role in the expression of secondary metabolites in plants. Increasing or decreasing these growth regulators may trigger a change in secondary metabolite biosynthesis pathways, thus causing differential expression of compounds in this plant.

Keywords: callus, in vitro, secondary metabolite, 2,4-Dichlorophenoxyacetic acid

Procedia PDF Downloads 373
2004 The Singapore Innovation Web and Facilitation of Knowledge Processes

Authors: Ola Jon Mork, Irina Emily Hansen

Abstract:

The European Growth Strategy Program calls for more efficient methods of knowledge creation and innovation. This study contributes new insights into the Singapore Innovation System, more precisely into how knowledge processes are facilitated. The research material was collected by visiting the different innovation locations in Singapore and through in-depth interviews with key persons. The websites and brochures of the different innovation actors were studied, as were governmental reports and figures. The findings show that the facilitation of knowledge processes in the Singapore Innovation System has a basic structure with three processes: 1) idea capturing, 2) technology and business execution, and 3) idea realization. Dedicated innovation parks work with the most promising entrepreneurs, more precisely, finding the persons with the motivation to 'change the world'. The innovation park facilitates these entrepreneurs for 100 days, during which they are also connected to a global network of venture capital and have access to mentors from these venture companies. Research institute parks work on the development of world-leading technology. To facilitate knowledge development, they connect with the industrial companies that are the most promising applicators of their technology. Knowledge facilitation is the main purpose, but this cooperation and testing also serve as a platform for funding, and the cooperation is probably also attractive for world-leading companies. Dedicated innovation parks also facilitate innovators of new applications and the perfection of products for the end-user. These parks can be specialized in particular areas, such as health products and life science products; another example is automotive companies issuing research calls for these parks to develop and innovate new products and services upon their technology.
A common characteristic of knowledge facilitation in the Singapore Innovation System is a short trial period for promising actors, normally 100 days. There is also a strong focus on training the entrepreneurs. Presentation and diffusion of knowledge are an important part of the facilitation, and funding is made available for the most successful entrepreneurs and innovators.

Keywords: knowledge processes, facilitation, innovation, Singapore innovation web

Procedia PDF Downloads 296
2003 Anti-Diabetic Effect of High Purity Epigallocatechin Gallate from Green Tea

Authors: Hye Jin Choi, Mirim Jin, Jeong June Choi

Abstract:

Green tea, one of the most popular teas, contains various ingredients beneficial to health. Epigallocatechin gallate (EGCG) is one of the main active polyphenolic compounds found in green tea, possessing diverse biologically beneficial effects such as anti-oxidative and anti-cancer activity. This study was performed to investigate the anti-diabetic effect of high-purity EGCG (> 98%) in a spontaneous diabetes mellitus animal model, the db/db mouse. Four-week-old male db/db mice, in which diabetes mellitus was induced by a high-fat diet, were orally administered high-purity EGCG (10, 50, and 100 mg/kg) for 4 weeks. Daily weight and diet efficiency were examined, and the blood glucose level was assessed once a week. After 4 weeks of EGCG administration, the fasting blood glucose level was measured. The mice were then sacrificed, and total abdominal fat was sampled to examine the change in fat weight. Plasma was separated from the blood, and the levels of aspartate aminotransferase (AST) and alanine aminotransferase (ALT) were investigated. As a result, blood glucose and body weight were significantly decreased by EGCG treatment compared to the control group, and the amount of abdominal fat was down-regulated by EGCG. However, ALT and AST levels, which are indicators of liver function, were similar to those of the control group. Taken together, our study suggests that high-purity EGCG is capable of treating diabetes mellitus in db/db mice with safety and has the potential to be developed into a therapeutic for metabolic disorders. This work was supported by the Korea Institute of Planning and Evaluation for Technology in Food, Agriculture and Forestry (IPET) through the High Value-added Food Technology Development Program, funded by the Ministry of Agriculture, Food and Rural Affairs (MAFRA) (317034-03-2-HD030).

Keywords: anti-diabetic effect, db/db mouse, diabetes mellitus, green tea, epigallocatechin gallate

Procedia PDF Downloads 186
2002 Data Envelopment Analysis of Allocative Efficiency among Small-Scale Tuber Crop Farmers in North-Central, Nigeria

Authors: Akindele Ojo, Olanike Ojo, Agatha Oseghale

Abstract:

This empirical study examined the allocative efficiency of smallholder tuber crop farmers in North-Central Nigeria. Data for the study were obtained from a primary source using a multi-stage sampling technique, with structured questionnaires administered to 300 randomly selected tuber crop farmers in the study area. Descriptive statistics, data envelopment analysis (DEA), and a Tobit regression model were used to analyze the data. The DEA classification of the farmers into efficient and inefficient groups showed that 17.67% of the sampled tuber crop farmers were operating at the frontier and optimum level of production, with a mean allocative efficiency of 1.00. This shows that the remaining 82.33% of farmers in the study area can still improve their level of efficiency through better utilization of available resources, given the current state of technology. The results of the Tobit model for factors influencing allocative inefficiency showed that as years of farming experience, level of education, cooperative society membership, extension contacts, credit access, and farm size increased, the allocative inefficiency of the farmers decreased. The results on the effects of the significant determinants of allocative inefficiency at various distribution levels revealed that allocative efficiency increased from 22% to 34% as farmers acquired more farming experience. The allocative efficiency index of farmers who belonged to a cooperative society was 0.23, while their counterparts without cooperative membership had an index value of 0.21. The results also showed that allocative efficiency rose to 0.43 for farmers with higher formal education and fell to 0.16 for farmers with non-formal education.
The efficiency of resource allocation also increased with greater contact with extension services: the allocative efficiency index increased from 0.16 to 0.31 as the frequency of extension contact increased from zero to a maximum of twenty contacts per annum. These results confirm that increases in years of farming experience, level of education, cooperative society membership, extension contacts, credit access, and farm size lead to increased efficiency. The results further show that the age of the farmers contributed 32% to efficiency, but this contribution reduces to an average of 15% as the farmer grows old. It is therefore recommended that enhanced research, extension delivery, and farm advisory services be put in place so that farmers who did not attain the optimum frontier level can learn to attain the remaining 74.39% of allocative efficiency through better production practices adopted from the robustly efficient farms. This would go a long way toward increasing the efficiency level of the farmers in the study area.
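The DEA classification described above can be sketched with a small linear program. This is a minimal illustration of an input-oriented, constant-returns-to-scale (CCR) envelopment model; the two-farm data set below is invented, not the survey data, and a full allocative-efficiency analysis would additionally use input prices.

```python
import numpy as np
from scipy.optimize import linprog

def dea_efficiency(inputs, outputs, dmu):
    """Input-oriented CRS (CCR) DEA efficiency of one decision-making unit.

    inputs:  (n_dmus, n_inputs) array;  outputs: (n_dmus, n_outputs) array
    Solves: min theta  s.t.  X.T @ lam <= theta * x_dmu,  Y.T @ lam >= y_dmu,  lam >= 0
    """
    n, m = inputs.shape
    s = outputs.shape[1]
    # Decision variables: [theta, lam_1, ..., lam_n]
    c = np.r_[1.0, np.zeros(n)]
    # Input rows: sum_j lam_j * x_ij - theta * x_i,dmu <= 0
    A_in = np.hstack([-inputs[dmu].reshape(m, 1), inputs.T])
    # Output rows: -sum_j lam_j * y_rj <= -y_r,dmu
    A_out = np.hstack([np.zeros((s, 1)), -outputs.T])
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(m), -outputs[dmu]],
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.x[0]

# Invented example: farm B uses twice the input of farm A for the same output.
X = np.array([[2.0], [4.0]])   # inputs (e.g., resource cost)
Y = np.array([[2.0], [2.0]])   # outputs (e.g., tuber yield)
print(round(dea_efficiency(X, Y, 0), 6))  # farm A is on the frontier -> 1.0
print(round(dea_efficiency(X, Y, 1), 6))  # farm B -> 0.5
```

Farmers with a score of 1.0 correspond to the 17.67% operating at the frontier in the study; scores below 1.0 quantify how far the remaining farmers could proportionally reduce inputs.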

Keywords: allocative efficiency, DEA, Tobit regression, tuber crop

Procedia PDF Downloads 288
2001 A Systematic Review on the Whole-Body Cryotherapy versus Control Interventions for Recovery of Muscle Function and Perceptions of Muscle Soreness Following Exercise-Induced Muscle Damage in Runners

Authors: Michael Nolte, Iwona Kasior, Kala Flagg, Spiro Karavatas

Abstract:

Background: Cryotherapy has been used as a post-exercise recovery modality for decades. Whole-body cryotherapy (WBC) is an intervention involving brief exposures to extremely cold air in order to induce therapeutic effects, and it is currently being investigated for its effectiveness in treating certain exercise-induced impairments. Purpose: The purpose of this systematic review was to determine whether WBC as a recovery intervention is more, less, or equally as effective as other interventions at reducing perceived levels of muscle soreness and promoting recovery of muscle function after exercise-induced muscle damage (EIMD) from running. Methods: A systematic review of the current literature was performed utilizing the following MeSH terms: cryotherapy, whole-body cryotherapy, exercise-induced muscle damage, muscle soreness, muscle recovery, and running. The databases utilized were PubMed, CINAHL, EBSCO Host, and Google Scholar. Articles were included if they were published within the last ten years, had a CEBM level of evidence of IIb or higher, had a PEDro scale score of 5 or higher, studied runners as primary subjects, and utilized both perceived levels of muscle soreness and recovery of muscle function as dependent variables. Articles were excluded if subjects did not include runners, if the interventions included partial-body cryotherapy (PBC) instead of WBC, or if both muscle performance and perceived muscle soreness were not assessed within the study. Results: Two of the four articles revealed that WBC was significantly more effective than treatment interventions such as far-infrared radiation and passive recovery at reducing perceived levels of muscle soreness and restoring muscle power and endurance following simulated trail runs and high-intensity interval running, respectively. One of the four articles revealed no significant difference between WBC and passive recovery in terms of reducing perceived muscle soreness and restoring muscle power following sprint intervals.
One of the four articles revealed that WBC had a harmful effect, compared to cold-water immersion (CWI) and passive recovery, on both perceived muscle soreness and recovery of muscle strength and power following a marathon. Discussion/Conclusion: Though there was no consensus on WBC’s effectiveness compared to other interventions at treating exercise-induced muscle damage following running, it seems that WBC may at least have a time-dependent positive effect on muscle soreness and recovery following high-intensity interval runs and endurance running, marathons excluded. More research needs to be conducted to determine the most effective way to implement WBC as a recovery method for exercise-induced muscle damage, including the optimal temperature, timing, duration, and frequency of treatment.

Keywords: cryotherapy, physical therapy intervention, physical therapy, whole body cryotherapy

Procedia PDF Downloads 237
2000 Serial Position Curves under Compressively Expanding and Contracting Schedules of Presentation

Authors: Priya Varma, Denis John McKeown

Abstract:

Psychological time, unlike physical time, is believed to be ‘compressive’ in the sense that the mental representations of a series of events may be internally arranged with ever-decreasing inter-event spacing (looking back from the most recently encoded event). If this is true, the record within immediate memory of recent events is severely temporally distorted. Although this notion of temporal distortion of the memory record is captured within some theoretical accounts of human forgetting, notably temporal distinctiveness accounts, the way in which the fundamental nature of the distortion underpins memory and forgetting more broadly is barely recognised, or at least rarely directly investigated. Our intention here was to manipulate the spacing of items for recall in order to ‘reverse’ this supposed natural compression within the encoding of the items. In Experiment 1, three schedules of presentation (expanding, contracting, and fixed irregular temporal spacing) were created using logarithmic spacing of the words, for both free and serial recall conditions. The results of recall of lists of 7 words showed statistically significant benefits of temporal isolation and, more excitingly, the contracting word series (which we may think of as reversing the natural compression within the mental representation of the word list) showed the best performance. Experiment 2 tested for effects of active verbal rehearsal in the recall task; this reduced but did not remove the benefits of our temporal scheduling manipulation. Finally, a third experiment used the same design but with Chinese characters as memoranda, in a further attempt to subvert possible verbal maintenance of items. One change to the design here was to introduce a probe item following the sequence of items and to record response times to this probe. Together, the outcomes of the experiments broadly support the notion of temporal compression within immediate memory.
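One way to sketch logarithmically spaced presentation schedules of the kind described above is with geometrically growing inter-item gaps (so the gaps are evenly spaced on a log scale). The list length, base gap, and growth factor below are invented for illustration; the paper does not specify its parameters:

```python
def log_schedule(n_items, first_gap=0.5, growth=1.6, contracting=False):
    """Onset times (seconds) for n_items with logarithmically scaled gaps.

    Successive inter-item gaps grow geometrically (expanding schedule);
    reversing the gap order yields the contracting schedule, which keeps
    the same total list duration but 'reverses' the compression.
    """
    gaps = [first_gap * growth ** i for i in range(n_items - 1)]
    if contracting:
        gaps.reverse()
    onsets = [0.0]
    for g in gaps:
        onsets.append(onsets[-1] + g)
    return [round(t, 3) for t in onsets]

expanding = log_schedule(7)
contracting = log_schedule(7, contracting=True)
# Both schedules span the same total duration; only the gap order differs.
print(expanding)
print(contracting)
```

Matching total duration across the expanding and contracting schedules is what allows the recall comparison to isolate the effect of gap ordering rather than overall list length.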

Keywords: memory, serial position curves, temporal isolation, temporal schedules

Procedia PDF Downloads 217
1999 A Conceptualization of the Relationship between Frontline Service Robots and Humans in Service Encounters and the Effect on Well-Being

Authors: D. Berg, N. Hartley, L. Nasr

Abstract:

This paper presents a conceptual model of human-robot interaction within service encounters and its effect on the well-being of both consumers and service providers. In this paper, service providers are those employees who work alongside frontline service robots. The significance of this paper lies in the knowledge created, which outlines how frontline service robots can be effectively utilized in service encounters for the benefit of organizations and society as a whole. As this paper is conceptual in nature, the main methodologies employed are theoretical, namely problematization and theory building. The significance of this paper is underpinned by the shift of service robots from manufacturing plants and factory floors to consumer-facing service environments. This service environment places robots in direct contact with frontline employees and consumers, creating a hybrid workplace where humans work alongside service robots. This change from back-end to front-end roles may have implications not only for the physical environment, servicescape, design, and strategy of service offerings and encounters but also for the human parties of the service encounter itself. Questions such as ‘how are frontline service robots impacting and changing the service encounter?’ and ‘what effect are such changes having on the well-being of the human actors in a service encounter?’ spring to mind; these questions form the research question of this paper. To truly understand social service robots, an interdisciplinary perspective is required. Besides understanding the function, system, design, or mechanics of a service robot, it is also necessary to understand human-robot interaction, and not simply human-robot interaction in general, but particularly what happens when such robots are placed in commercial settings and human-robot interaction becomes consumer-robot interaction and employee-robot interaction.
A service robot in this paper is characterized by two main factors: its social characteristics and the consumer-facing environment within which it operates. The conceptual framework presented in this paper contributes to interdisciplinary discussions surrounding social robotics, service, and technology’s impact on consumer and service provider well-being, in the hope that such knowledge will help improve services, as well as the prosperity and well-being of society.

Keywords: frontline service robots, human-robot interaction, service encounters, well-being

Procedia PDF Downloads 207
1998 Videoconference Technology: An Attractive Vehicle for Challenging and Changing Tutors' Practice in an Open and Distance Learning Environment

Authors: Ramorola Mmankoko Ziphorah

Abstract:

Videoconference technology represents a recent experiment in technology integration into teaching and learning in South Africa. It is increasingly used as a substitute for traditional face-to-face approaches to teaching and learning, helping tutors to reshape and change their teaching practices. Interestingly, though, some studies point out that in the Open and Distance Learning context, videoconference technology is commonly used by tutors for knowledge dissemination and not so much for the actual teaching of course content. Though videoconference technology has become one of the dominant technologies available among Open and Distance Learning institutions, it is not clear that it has been used effectively to bridge the learning distance in time, geography, and economy. While most tutor preparation programs prepare tutors theoretically for the use of videoconference technology, there are still no practical guidelines on how they should go about integrating this technology into their course teaching. Therefore, there is an urgent need to focus on tutor development, specifically on tutors' capacities and skills to use videoconference technology. The assumption is that if tutors become competent in the use of videoconference technology for course teaching, then its use in the Open and Distance Learning environment will become more commonplace. This is the imperative of the 4th Industrial Revolution (4IR) on education generally. Against the current vacuum in the practice of using videoconference technology for course teaching, the current study proposes a qualitative phenomenological approach to investigate the efficacy of videoconferencing as an approach to student learning.
Using interview and observation data from ten participants in an Open and Distance Learning institution, the author discusses how dialogue and structure interacted to provide the participating tutors with a rich set of opportunities to deliver course content. The findings of this study highlight various challenges experienced by tutors when using videoconference technology. The study suggests tutor development programs focused on capacity and skills and on how to integrate this technology with various teaching strategies in order to enhance student learning. The author argues that it is not merely the existence of the structure, namely the videoconference technology, that provides the opportunity for effective teaching, but that it is the interactions, namely the dialogue amongst tutors and learners, that make videoconference technology an attractive vehicle for challenging and changing tutors' practice.

Keywords: open distance learning, transactional distance, tutor, videoconference

Procedia PDF Downloads 127
1997 Remote Sensing and GIS Based Methodology for Identification of Low Crop Productivity in Gautam Buddha Nagar District

Authors: Shivangi Somvanshi

Abstract:

Poor crop productivity in salt-affected environments in the country is due to insufficient and untimely canal supply to agricultural land and inefficient field water management practices. The situation can degrade further owing to inadequate maintenance of the canal network, ongoing secondary soil salinization and waterlogging, and worsening groundwater quality. Large patches of low productivity occur in irrigation commands due to waterlogging and salt-affected soil, particularly in years of scarce rainfall. Satellite remote sensing has been used for mapping areas of low crop productivity, waterlogging, and salinity in irrigation commands, but the spatial results obtained so far are of limited reliability for further use because soil quality parameters change rapidly over the years. The existing spatial databases of the canal network and flow data, groundwater quality, and salt-affected soil were obtained from central and state line departments/agencies and integrated with GIS. On this basis, an integrated methodology based on remote sensing and GIS was developed in the ArcGIS environment using canal supply status, groundwater quality, salt-affected soils, and the satellite-derived vegetation index (NDVI), salinity index (NDSI), and waterlogging index (NSWI). This methodology was tested for identification and delineation of areas of low productivity in the Gautam Buddha Nagar district (Uttar Pradesh). The affected area was found to lie mainly in the Dankaur and Jewar blocks of the district. The problem area was verified with ground data and found to be approximately 78% accurate. The methodology has the potential to be used in other irrigation commands in the country to obtain reliable spatial data on low crop productivity.
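As a hedged illustration of the satellite-derived layers mentioned above, the sketch below computes NDVI and a normalized-difference salinity index from toy band arrays. The particular NDSI formulation and the flagging thresholds are assumptions for illustration, not the study's actual values.

```python
import numpy as np

def normalized_difference(a, b):
    """(a - b) / (a + b), returning 0 where the denominator is 0."""
    denom = a + b
    safe = np.where(denom == 0, 1.0, denom)
    return np.where(denom == 0, 0.0, (a - b) / safe)

# Toy 2x2 reflectance rasters standing in for the red and near-infrared bands.
red = np.array([[0.2, 0.3],
                [0.4, 0.1]])
nir = np.array([[0.6, 0.5],
                [0.4, 0.7]])

ndvi = normalized_difference(nir, red)  # vegetation vigour
ndsi = normalized_difference(red, nir)  # one common salinity formulation (assumption)

# Candidate low-productivity pixels: weak vegetation plus a salinity signal.
low_productivity = (ndvi < 0.3) & (ndsi > -0.3)
```

In a real workflow, such index layers would be overlaid in GIS with the canal supply, groundwater quality, and soil layers before delineating problem areas.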

Keywords: remote sensing, GIS, salt affected soil, crop productivity, Gautam Buddha Nagar

Procedia PDF Downloads 284
1996 How Participatory Climate Information Services Assist Farmers to Uptake Rice Disease Forecasts and Manage Diseases in Advance: Evidence from Coastal Bangladesh

Authors: Moriom Akter Mousumi, Spyridon Paparrizos, Fulco Ludwig

Abstract:

Rice yield reduction due to climate change-induced disease occurrence is becoming a great concern for coastal farmers of Bangladesh. The development of participatory climate information services (CIS) based on farmers’ needs could facilitate farmers in obtaining disease forecasts and making better decisions to manage diseases. Therefore, this study aimed to investigate how participatory climate information services assist coastal rice farmers to take up rice disease forecasts and better manage rice diseases by improving their informed decision-making. Through participatory approaches, we developed a tailor-made agrometeorological service through the DROP app to forecast rice diseases and manage them in advance. During farmer field schools (FFS), we communicated 7-day disease forecasts at face-to-face weekly meetings using printed paper and a messenger app, both derived from the DROP app. Results show that the majority of the farmers understand disease forecasts through visualization, symbols, and text. The majority of them use disease forecast information directly from the DROP app, followed by face-to-face meetings, the messenger app, and printed paper. Farmers' participation and engagement during capacity-building training at FFS also assisted them in making more informed decisions and improving management of diseases using both preventive and chemical measures throughout the rice cultivation period. We conclude that the development of participatory CIS and the associated capacity-building and training of farmers has increased farmers' understanding and uptake of disease forecasts for better management of rice diseases. Participatory services such as the DROP app offer great potential as an adaptation option for climate-smart rice production under changing climatic conditions.

Keywords: participatory climate service, disease forecast, disease management, informed decision making, coastal Bangladesh

Procedia PDF Downloads 44
1995 Changing the Dynamics of the Regional Water Security in the Mekong River Basin: An Explorative Study Understanding the Cooperation and Conflict from Critical Hydropolitical Perspective

Authors: Richard Grünwald, Wenling Wang, Yan Feng

Abstract:

The presented paper explores the changing dynamics of regional water security in the Mekong River Basin and examines the contemporary water-related challenges from a critical hydropolitical perspective. By drawing on the Lancang-Mekong Cooperation and Conflict Database (LMCCD), recording more than 3000 water-related events within the basin over the last 30 years, we identified several trends changing the dynamics of regional water security in the Mekong River Basin. Firstly, there is a growing politicization of water, which is no longer interpreted as abundant. While some scientists blame the rapid basin development, particularly in upstream countries, other researchers consider climate change and the cumulative environmental impacts of various water projects as the main culprits for changing the water flow. Secondly, there is an increasing securitization of large-scale hydropower dams, with questionable outcomes. Although hydropower dams raise many controversies, many riparian states push their development at all costs. This water security dilemma can especially be traced to Laos and Cambodia, which invest heavily in the hydropower sector even at the expense of the local environment and good relations with neighbouring countries situated lower on the river. Thirdly, there is a lack of accountable transboundary water governance capable of effectively facing a looming water crisis. To date, most of the existing cooperation mechanisms are undermined by the geopolitical interests of foreign donors and increasing mistrust of scientific approaches to water insecurity. Our findings are beneficial for policy-makers and other water experts who want to grasp the broader hydropolitical context in the Mekong River Basin and better understand the new water security threats, including misinterpretation of hydrological data and legitimization of pro-development narratives.

Keywords: critical hydropolitics, Mekong River, politicization of science, water governance, water security

Procedia PDF Downloads 211
1994 A Methodology Based on Image Processing and Deep Learning for Automatic Characterization of Graphene Oxide

Authors: Rafael do Amaral Teodoro, Leandro Augusto da Silva

Abstract:

Originating from graphite, graphene is a two-dimensional (2D) material that promises to revolutionize technology in many different areas, such as energy, telecommunications, civil construction, aviation, textiles, and medicine. This is possible because its structure, formed by carbon bonds, provides desirable optical, thermal, and mechanical characteristics that are interesting to multiple areas of the market. Thus, several research and development centers are studying different manufacturing methods and material applications of graphene, which are often compromised by the scarcity of agile and accurate methodologies to characterize the material, that is, to determine its composition, shape, size, and the number of layers and crystals. To address this need, this study proposes a computational methodology that applies deep learning to identify graphene oxide crystals in order to characterize samples by crystal size. To achieve this, a fully convolutional neural network called U-net has been trained to segment SEM graphene oxide images. The segmentation generated by the U-net is fine-tuned with a per-class standard deviation technique, which allows crystals to be distinguished with different labels through an object delimitation algorithm. Next, the position, area, perimeter, and lateral measures of each detected crystal are extracted from the images. This information generates a database with the dimensions of the crystals that compose the samples. Finally, graphs are automatically created showing the frequency distributions of crystal area and perimeter. This methodological process resulted in a high capacity for segmentation of graphene oxide crystals, presenting accuracy and F-score of 95% and 94%, respectively, on the test set.
Such performance demonstrates a high generalization capacity of the method in crystal segmentation, since it holds up under significant changes in image extraction quality. The measurement of non-overlapping crystals presented an average error of 6% across the different measurement metrics, suggesting that the model provides high-accuracy measurements for non-overlapping segmentations. For overlapping crystals, however, a limitation of the model was identified. To overcome this limitation, it is important to ensure that the samples to be analyzed are properly prepared; this will minimize crystal overlap during SEM image acquisition and guarantee lower measurement error without greater effort in data handling. All in all, the method developed is a significant time saver with high measurement value, as it is capable of measuring hundreds of graphene oxide crystals in seconds, saving weeks of manual work.
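To illustrate the delimitation-and-measurement step described above, the following sketch labels the crystals in a toy binary segmentation mask and derives per-crystal areas. The mask, the pixel-area calibration, and the use of connected-component labelling are illustrative assumptions, not the paper's exact pipeline.

```python
import numpy as np
from scipy import ndimage

# Toy binary mask standing in for a U-net segmentation of an SEM image
# (1 = graphene oxide crystal, 0 = background).
mask = np.array([
    [1, 1, 0, 0, 0],
    [1, 1, 0, 1, 1],
    [0, 0, 0, 1, 1],
    [0, 0, 0, 1, 1],
])
PIXEL_AREA_NM2 = 4.0  # hypothetical calibration: nm^2 per pixel

# Delimit individual crystals via connected-component labelling.
labels, n_crystals = ndimage.label(mask)

# Per-crystal pixel counts -> physical areas, the basis of the size histograms.
pixel_counts = ndimage.sum(mask, labels, index=list(range(1, n_crystals + 1)))
areas_nm2 = pixel_counts * PIXEL_AREA_NM2
```

From `areas_nm2`, frequency distributions by size can then be plotted per sample.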

Keywords: characterization, graphene oxide, nanomaterials, U-net, deep learning

Procedia PDF Downloads 158
1993 Experimental and Computational Fluid Dynamic Modeling of a Progressing Cavity Pump Handling Newtonian Fluids

Authors: Deisy Becerra, Edwar Perez, Nicolas Rios, Miguel Asuaje

Abstract:

The Progressing Cavity Pump (PCP) is a type of positive displacement pump that is gaining importance as artificial lift equipment in the heavy oil field. The most commonly used PCP is the single-lobe pump, which consists of a single external helical rotor turning eccentrically inside a double internal helical stator. This type of pump was analyzed experimentally and with a Computational Fluid Dynamics (CFD) approach using the DCAB031 model located in a closed-loop arrangement. Experimental measurements were taken to determine the pressure rise and flow rate with a flow control valve installed at the outlet of the pump. The flow rate handled was measured by a FLOMEC-OM025 oval gear flowmeter. For each flow rate considered, the pump’s rotational speed and power input were controlled using an Invertek Optidrive E3 frequency drive. Once steady-state operation was attained, pressure rise measurements were taken with a Sper Scientific wide-range digital pressure meter. In this study, water and three Newtonian oils of different viscosities were tested at different rotational speeds. The CFD model was implemented in Star-CCM+ using an overset mesh that captures the relative motion between rotor and stator, which is one of the main contributions of the present work. The simulations provide detailed information about the pressure and velocity fields inside the device in laminar and unsteady regimes. The simulations show good agreement with the experimental data, with a Mean Squared Error (MSE) under 21%, and the Grid Convergence Index (GCI) calculated to validate the mesh yielded a value of 2.5%. Three rotational speeds were evaluated (200, 300, and 400 rpm), showing a directly proportional relationship between the rotational speed of the rotor and the calculated flow rate.
The maximum production rates at the different speeds were 3.8 GPM, 4.3 GPM, and 6.1 GPM for water, and 1.8 GPM, 2.5 GPM, and 3.8 GPM for the oils tested, respectively. Likewise, an inversely proportional relationship between fluid viscosity and pump performance was observed, since the viscous oils showed the lowest pressure rise and the lowest volumetric flow pumped, with a degradation of around 30% in pressure rise between performance curves. Finally, the Productivity Index (PI) remained approximately constant across the speeds evaluated; however, a decrease between fluids was observed due to viscosity.
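The Grid Convergence Index used above for mesh validation follows a standard three-grid procedure (after Roache). The sketch below shows that calculation with illustrative solution values and refinement ratio, not the study's data.

```python
import math

# Grid Convergence Index for three systematically refined meshes.
# f1 is the finest-grid solution, f3 the coarsest; values are illustrative.
f1, f2, f3 = 10.20, 10.35, 10.80  # e.g. pressure rise on fine/medium/coarse grids
r = 2.0                            # constant grid refinement ratio (assumed)
Fs = 1.25                          # safety factor recommended for three-grid studies

# Observed order of convergence from the three solutions.
p = math.log(abs((f3 - f2) / (f2 - f1))) / math.log(r)

# Relative error between the two finest grids and the resulting fine-grid GCI (%).
e21 = abs((f2 - f1) / f1)
gci_fine = 100.0 * Fs * e21 / (r**p - 1.0)
```

A small `gci_fine` (here under 1%) indicates the fine-grid solution is close to grid independence, which is the sense in which the reported 2.5% validates the mesh.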

Keywords: computational fluid dynamic, CFD, Newtonian fluids, overset mesh, PCP pressure rise

Procedia PDF Downloads 127
1992 Toward an Informed Capacity Development Program in Inclusive and Sustainable Agricultural and Rural Development

Authors: Maria Ana T. Quimbo

Abstract:

As the Southeast Asian Regional Center for Graduate Study and Research in Agriculture (SEARCA) approaches its 50th founding anniversary, it continues to pursue its mission of strengthening the capacities of Southeast Asian leaders and institutions under its reformulated mission of Inclusive and Sustainable Agricultural and Rural Development (ISARD). Guided by this mission, this study analyzed the desired and priority capacity development needs of institution heads and key personnel toward addressing the constraints, problems, and issues related to agricultural and rural development and achieving their institutional goals. Adopting an exploratory, descriptive research design, the study examined competency needs at the institutional and personnel levels. A total of 35 institution heads from seven countries and 40 key personnel from eight countries served as research participants. The results showed a variety of competencies in the areas of leadership and management, agriculture, climate change, research, monitoring and evaluation, planning, and extension or community service. While mismatches were found in a number of desired and priority competency areas as perceived by the respondents, there were also interesting concordant answers in both technical and non-technical areas. Interestingly, the competency needs desired and prioritized were a combination of “hard” or technical skills and “soft” or interpersonal skills. Policy recommendations were put forward on the need to continue building capacities in core competencies along ISARD; balance 'hard' and 'soft' skills through appropriate training strategies and explicit statements in training objectives; strengthen awareness of “soft” skills through their integration into workplace culture; build capacity in action research; continue partnerships; encourage mentoring; and prioritize and build capacity in desired and priority competency areas.

Keywords: capacity development, competency needs assessment, sustainability and development, ISARD

Procedia PDF Downloads 377
1991 Numerical Investigation of Dynamic Stall over a Wind Turbine Pitching Airfoil by Using OpenFOAM

Authors: Mahbod Seyednia, Shidvash Vakilipour, Mehran Masdari

Abstract:

Computations of two-dimensional flow past a stationary and a harmonically pitching wind turbine airfoil at a moderate Reynolds number (400,000) are carried out by progressively increasing the angle of attack for the stationary airfoil and at fixed pitching frequencies for the rotary one. The incompressible Navier-Stokes equations, in conjunction with the Unsteady Reynolds-Averaged Navier-Stokes (URANS) equations for turbulence modeling, are solved with the OpenFOAM package to investigate the aerodynamic phenomena occurring under stationary and pitching conditions on a NACA 6-series wind turbine airfoil. The aim of this study is to enhance the accuracy of numerical simulation in predicting the aerodynamic behavior of an oscillating airfoil in OpenFOAM. Hence, for turbulence modeling, k-ω SST with a low-Reynolds correction is employed to capture the unsteady phenomena occurring in the stationary and oscillating motion of the airfoil. Using aerodynamic and pressure coefficients along with flow patterns, the unsteady aerodynamics in the pre-, near-, and post-static-stall regions are analyzed for the harmonically pitching airfoil, and the results are validated against the corresponding experimental data possessed by the authors. The results indicate that the chosen turbulence model leads to accurate prediction of the static stall angle for the stationary airfoil and of flow separation, the dynamic stall phenomenon, and flow reattachment on the airfoil surface for the pitching one. Due to the geometry of the studied 6-series airfoil, the vortex on the upper surface of the airfoil during upstrokes forms at the trailing edge. Therefore, the flow pattern obtained by our numerical simulations represents the formation and evolution of the trailing-edge vortex in the near- and post-stall regions, where this process determines the dynamic stall phenomenon.
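The harmonic pitching motion described above is conventionally prescribed as a sinusoidal angle-of-attack law, characterized by the reduced frequency. The sketch below shows this with illustrative parameter values, which are assumptions and not those of the study.

```python
import math

# Illustrative pitching parameters (assumed, not the paper's values).
alpha_mean = 10.0  # mean angle of attack [deg]
alpha_amp = 5.0    # pitch amplitude [deg]
freq = 2.0         # pitching frequency [Hz]
chord = 0.25       # chord length [m]
u_inf = 20.0       # freestream velocity [m/s]

def alpha(t):
    """Instantaneous angle of attack [deg] for sinusoidal pitching."""
    return alpha_mean + alpha_amp * math.sin(2.0 * math.pi * freq * t)

# Reduced frequency, the standard nondimensional measure of unsteadiness:
# k = pi * f * c / U_inf.
k = math.pi * freq * chord / u_inf
```

In a URANS setup, such a law would drive the mesh motion (or a moving boundary condition) for the pitching airfoil, with `k` determining how strongly dynamic stall departs from the static polar.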

Keywords: CFD, moderate Reynolds number, OpenFOAM, pitching oscillation, unsteady aerodynamics, wind turbine

Procedia PDF Downloads 201
1990 Emotion-Convolutional Neural Network for Perceiving Stress from Audio Signals: A Brain Chemistry Approach

Authors: Anup Anand Deshmukh, Catherine Soladie, Renaud Seguier

Abstract:

Emotion plays a key role in many applications, such as healthcare, where it helps to gather patients’ emotional behavior. Unlike typical ASR (Automatic Speech Recognition) problems, which focus on 'what was said', it is equally important to understand 'how it was said.' Certain emotions are given more importance due to their effectiveness in understanding human feelings. In this paper, we propose an approach that models human stress from audio signals. The research challenge in speech emotion detection is finding the appropriate set of acoustic features corresponding to an emotion. Another difficulty lies in defining the very meaning of emotion and being able to categorize it precisely. Supervised machine learning models, including state-of-the-art deep learning classification methods, rely on the availability of clean, labelled data, and one of the problems in affective computing is the limited amount of annotated data; the existing labelled emotion datasets are highly subject to the perception of the annotator. We address the first issue, feature selection, by exploiting traditional MFCC (Mel-Frequency Cepstral Coefficient) features in a Convolutional Neural Network. Our proposed Emo-CNN (Emotion-CNN) architecture treats speech representations much as CNNs treat images in a vision problem. Our experiments show that Emo-CNN consistently and significantly outperforms popular existing methods over multiple datasets. It achieves 90.2% categorical accuracy on the Emo-DB dataset. We claim that Emo-CNN is robust to speaker variations and environmental distortions; the proposed approach achieves 85.5% speaker-dependent categorical accuracy on the SAVEE (Surrey Audio-Visual Expressed Emotion) dataset, beating the existing CNN-based approach by 10.2%. To tackle the second problem, subjectivity in stress labels, we use Lovheim’s cube, a three-dimensional projection of emotions.
Monoamine neurotransmitters are chemical messengers in the brain that transmit signals involved in perceiving emotions. The cube aims to explain the relationship between these neurotransmitters and the positions of emotions in 3D space. The learnt emotion representations from the Emo-CNN are mapped to the cube using three-component PCA (Principal Component Analysis), which is then used to model human stress. This approach not only circumvents the need for labelled stress data but also complies with the psychological theory of emotions given by Lovheim’s cube. We believe that this work is a first step toward creating a connection between artificial intelligence and the chemistry of human emotions.
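The three-component PCA mapping described above can be sketched as follows with random stand-in embeddings; the embedding dimensionality and data here are assumptions for illustration, not the learnt Emo-CNN representations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for learnt Emo-CNN embeddings: 200 utterances x 64 features.
embeddings = rng.normal(size=(200, 64))

# Three-component PCA via SVD of the centred data, projecting each utterance
# into a 3-D space analogous to the axes of Lovheim's cube.
centred = embeddings - embeddings.mean(axis=0)
_, _, vt = np.linalg.svd(centred, full_matrices=False)
coords_3d = centred @ vt[:3].T  # shape (200, 3), ordered by explained variance
```

Each utterance's position in `coords_3d` could then be related to the emotion vertices of the cube to derive an unsupervised stress estimate.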

Keywords: deep learning, brain chemistry, emotion perception, Lovheim's cube

Procedia PDF Downloads 153
1989 An Index to Measure Transportation Sustainable Performance in Construction Projects

Authors: Sareh Rajabi, Taha Anjamrooz, Salwa Bheiry

Abstract:

The continuous increase in the world population, resource shortages, and the threat of climate change cause various environmental and social issues worldwide; the concept of sustainability is therefore much needed. Organizations are falling under increasingly strong worldwide pressure to integrate sustainability practices into their project decision-making. The construction industry is among the most significant in this respect, being one of the largest sectors of the national economy, and hence has a massive effect on the environment and society, so it is important to discover approaches to incorporate sustainability into the management of construction projects. This study presents a combined sustainability index for projects with sustainable transportation, formed on the basis of a comprehensive literature review and survey study. Transportation systems enable the movement of goods and services worldwide, driving economic growth and creating jobs while also producing negative impacts on the environment and society. This research quantifies sustainability indicators by 1) identifying the importance of sustainable transportation indicators based on the sustainable practices used in construction projects and 2) measuring the effectiveness of practices, through these indicators, on the three sustainability pillars. A total of 26 sustainability indicators were selected and grouped under the related sustainability pillars. A survey with a scoring system was used to collect opinions about the sustainability indicators. A combined sustainability index considering the three sustainability pillars can be helpful in evaluating the transportation sustainability practices of a project and making decisions regarding project selection.
In addition to focusing on financial resource allocation in project selection, the decision-maker can take sustainability into account as an important key alongside the project’s return and risk. The purpose of this study is to measure transportation sustainability performance, allowing companies to assess multiple project selections. This is useful to decision-makers in ranking projects and focusing more on future sustainable projects.
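A minimal sketch of how such a combined index might aggregate survey scores across the three pillars is given below; the indicators, scoring scale, and pillar weights here are hypothetical, not the 26 indicators or weights of the study.

```python
# Hypothetical indicator scores (survey ratings on a 1-5 scale) per pillar.
pillar_scores = {
    "environmental": [4, 5, 3, 4],
    "social":        [3, 4, 4],
    "economic":      [5, 4, 4, 3],
}
# Hypothetical pillar weights, summing to 1.
pillar_weights = {"environmental": 0.4, "social": 0.3, "economic": 0.3}

def combined_index(scores, weights):
    """Weighted average of per-pillar mean indicator scores."""
    return sum(weights[p] * (sum(v) / len(v)) for p, v in scores.items())

index = combined_index(pillar_scores, pillar_weights)  # higher = more sustainable
```

Scoring each candidate project this way yields a single comparable number, which is what lets a decision-maker rank projects on sustainability alongside return and risk.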

Keywords: sustainable transportation, transportation performances, sustainable indicators, sustainable construction practice, sustainability

Procedia PDF Downloads 141
1988 Machine Learning for Exoplanetary Habitability Assessment

Authors: King Kumire, Amos Kubeka

Abstract:

The synergy of machine learning and advances in astronomical technology is giving rise to the new space age, marked by better habitability assessments. To initiate this discussion, it should be recorded for definition purposes that the symbiotic relationship between astronomy and improved computing has been code-named the Cis-Astro gateway concept. The phrase is adapted from the cis-lunar gateway template and its associated Lagrange points, which act as an orbital bridge from our planet Earth to the moon. For this study, however, the scientific audience is invited to bridge toward the discovery of new habitable planets. Cosmic probes of this magnitude can serve as the starting nodes of the astrobiological search for galactic life. This research can also act as a navigation system for future space telescope launches through the delimitation of target exoplanets, and the findings and associated platforms can be harnessed as building blocks for modeling climate change on planet Earth. The possibility that the human genus exhausts the resources of planet Earth, or that some catastrophe makes the Earth uninhabitable for humans, explains the need to find an alternative planet to inhabit. Through interdisciplinary discussions of the International Astronautical Federation, the scientific community so far holds the common position that engineers can reduce space mission costs by constructing a stable cis-lunar orbit infrastructure for refuelling and other in-orbit servicing activities. Similarly, the Cis-Astro gateway can be envisaged as a budget optimization technique that models extra-solar bodies and can facilitate the scoping of future mission rendezvous. This broad and voluminous catalog of exoplanets will be narrowed along the way using machine learning filters.
The gist of this topic revolves around the indirect economic rationale of establishing a habitability scoping platform.

Keywords: machine-learning, habitability, exoplanets, supercomputing

Procedia PDF Downloads 88
1986 Implications of Stakeholder Theory as a Critical Theory

Authors: Louis Hickman

Abstract:

Stakeholder theory is a powerful conception of the firm based on the notion that a primary focus on shareholders is inadequate and, in fact, detrimental to the long-term health of the firm. As such, it represents a departure from prevalent business school teachings, with their focus on accounting and cost controls. Herein, it is argued that stakeholder theory is better conceptualized as a critical theory: one that represents a fundamental change in business behavior and can transform the behavior of businesses if accepted. By arguing that financial interests underdetermine the success of the firm, stakeholder theory further democratizes business by endorsing an increased awareness of the importance of non-shareholder stakeholders. Stakeholder theory requires new, non-financial measures of success that provide a new consciousness for management and businesses when conceiving their actions and place in society. Thereby, stakeholder theory can show individuals, through self-reflection, that the capitalist impulse to generate wealth cannot act as the primary driver of business behavior; rather, we would choose to support interests outside ourselves if we made the decision in free discussion. This is due to the false consciousness, embedded in our capitalism, that the firm’s finances are the foremost concern of modern organizations at the expense of other goals. A focus on non-shareholder stakeholders in addition to shareholders generates greater benefits for society by improving the state of customers, employees, suppliers, the community, and shareholders alike. These positive effects generate further gains in stakeholder well-being and translate into increased health for the future firm.
Additionally, shareholders are the only stakeholder group that does not, by itself, provide long-term firm value, since there are not always communities with qualified employees, suppliers capable of providing the required product quality, or persons with purchasing power for all conceivable products. Therefore, the firm's long-term health is best served by improving the greatest possible part of the society it inhabits, rather than the shareholder alone.

Keywords: capitalism, critical theory, self-reflection, stakeholder theory

Procedia PDF Downloads 345
1985 Investigating Role of Novel Molecular Players in Forebrain Roof-Plate Midline Invagination

Authors: Mohd Ali Abbas Zaidi, Meenu Sachdeva, Jonaki Sen

Abstract:

In the vertebrate embryo, the forebrain anlagen develops from the anterior-most region of the neural tube, which is the precursor of the central nervous system (CNS). The roof plate, located at the dorsal midline region of the forebrain anlagen, acts as a source of several secreted molecules involved in patterning and morphogenesis of the forebrain. One such key morphogenetic event is the invagination of the forebrain roof plate, which results in separation of the single forebrain vesicle into two cerebral hemispheres. Retinoic acid (RA) signaling plays a key role in this process. Blocking RA signaling at the dorsal forebrain midline inhibits dorsal invagination and results in the absence of certain key features of this region, such as thinning of the neuroepithelium and a lowering of cell proliferation. At present, we are investigating the possibility of other signaling pathways acting in concert with RA signaling to regulate this process. We have focused on BMP signaling, which we found to be active in a domain mutually exclusive with that of RA signaling within the roof plate. We have also observed that there is a change in BMP signaling activity on modulation of RA signaling, indicating an antagonistic relationship between the two. Moreover, constitutive activation of BMP signaling seems to completely inhibit thinning and partially affect invagination, leaving the lowering of cell proliferation in the midline unaffected. We are employing in-silico modeling as well as molecular manipulations to investigate the relative contribution, if any, of regional differences in rates of cell proliferation and thinning of the neuroepithelium to the process of invagination. We have found expression of certain cell adhesion molecules in the forebrain roof plate whose mRNA localization across the thickness of the neuroepithelium is influenced by BMP and RA signaling, conferring regional rigidity on the roof plate and assisting invagination.
We also found expression of certain cytoskeleton modifiers in small localized domains of the invaginating forebrain roof plate, suggesting that midline invagination is under the control of many factors.

Keywords: bone morphogenetic signaling, cytoskeleton, cell adhesion molecules, forebrain roof plate, retinoic acid signaling

Procedia PDF Downloads 153
1984 Relationship between Different Heart Rate Control Levels and Risk of Heart Failure Rehospitalization in Patients with Persistent Atrial Fibrillation: A Retrospective Cohort Study

Authors: Yongrong Liu, Xin Tang

Abstract:

Background: Persistent atrial fibrillation is a common arrhythmia closely related to heart failure. Heart rate control is an essential strategy for treating persistent atrial fibrillation. Still, the understanding of the relationship between different heart rate control levels and the risk of heart failure rehospitalization is limited. Objective: The objective of the study is to determine the relationship between different levels of heart rate control in patients with persistent atrial fibrillation and the risk of readmission for heart failure. Methods: We conducted a retrospective dual-centre cohort study, collecting data from patients with persistent atrial fibrillation who received outpatient treatment at two tertiary hospitals in central and western China from March 2019 to March 2020. The collected data included age, gender, body mass index (BMI), medical history, and hospitalization frequency due to heart failure. Patients were divided into three groups based on their heart rate control levels: Group I with a resting heart rate of less than 80 beats per minute, Group II with a resting heart rate between 80 and 100 beats per minute, and Group III with a resting heart rate greater than 100 beats per minute. The readmission rates due to heart failure within one year after discharge were statistically analyzed using propensity score matching in a 1:1 ratio. Differences in readmission rates among the different groups were compared using one-way ANOVA. The impact of varying levels of heart rate control on the risk of readmission for heart failure was assessed using the Cox proportional hazards model. Binary logistic regression analysis was employed to control for potential confounding factors. Results: We enrolled a total of 1136 patients with persistent atrial fibrillation. The results of the one-way ANOVA showed that there were differences in readmission rates among groups exposed to different levels of heart rate control. 
The readmission rates due to heart failure for each group were as follows: Group I (n=432): 31 (7.17%); Group II (n=387): 43 (11.11%); Group III (n=317): 90 (28.50%) (F=54.3, P<0.001). After performing 1:1 propensity score matching across the groups, 223 pairs were obtained. Analysis using the Cox proportional hazards model showed that, compared to Group I, the hazard ratio for readmission was 1.372 for Group II (95% CI: 1.125-1.682, P<0.001) and 2.053 for Group III (95% CI: 1.006-5.437, P<0.001). Furthermore, binary logistic regression analysis, including digoxin use, hypertension, smoking, coronary heart disease, and chronic obstructive pulmonary disease (COPD) as independent variables, revealed that coronary heart disease and COPD also had a significant impact on readmission due to heart failure (P<0.001). Conclusion: The heart rate control level of patients with persistent atrial fibrillation is positively correlated with the risk of heart failure rehospitalization. Reasonable heart rate control may significantly reduce the risk of heart failure rehospitalization.
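The crude group comparison reported in the abstract can be reproduced directly from the stated counts. The sketch below is illustrative only: the Group II event count (43) is inferred from its reported 11.11% rate, and these unadjusted risk ratios will differ from the published hazard ratios (1.372, 2.053), which come from the Cox model after propensity-score matching.

```python
# Crude (unadjusted) readmission rates and risk ratios from the abstract's counts.
# Group II's event count is inferred from its reported rate (11.11% of 387 ~ 43).
groups = {
    "I (<80 bpm)":     (31, 432),
    "II (80-100 bpm)": (43, 387),   # inferred count, not stated in the abstract
    "III (>100 bpm)":  (90, 317),
}

ref_events, ref_n = groups["I (<80 bpm)"]
ref_rate = ref_events / ref_n   # Group I is the reference, as in the Cox analysis

for name, (events, n) in groups.items():
    rate = events / n
    rr = rate / ref_rate        # crude risk ratio vs. Group I
    print(f"Group {name}: {rate:.2%} readmitted, crude RR = {rr:.2f}")
```

These crude ratios only rank the groups; the matched Cox model is what adjusts for baseline differences between them.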

Keywords: heart rate control levels, heart failure rehospitalization, persistent atrial fibrillation, retrospective cohort study

Procedia PDF Downloads 72
1983 Sensory and Microbiological Sustainability of Smoked Meat Products–Smoked Ham in Order to Determine the Shelf-Life under the Changed Conditions at +15°C

Authors: Radovan Čobanović, Milica Rankov Šicar

Abstract:

Meat belongs to the group of perishable foods, which can spoil very rapidly if stored at room temperature. Salting in combination with smoking is intended to extend shelf life and also to form the specific taste, odor, and color. Smoke does not affect only the taste and flavor of the product; it also has a bactericidal and oxidative effect, which is why smoked products are less susceptible to oxidation and decay processes. Accordingly, the goal of this study was to evaluate the shelf life of smoked ham stored under conditions of elevated temperature (+15°C). For the purposes of this study, analyses were conducted on eight samples of smoked ham every 7th day, from the day of reception until the 21st day. During this period, the smoked ham was subjected to sensory analysis (appearance, odor, taste, color, aroma) and bacteriological analyses (Listeria monocytogenes, Salmonella spp., and yeasts and molds) according to Serbian state regulations. All analyses were performed according to ISO methodology: sensory analysis ISO 6658, Listeria monocytogenes ISO 11290-1, Salmonella spp. ISO 6579, and yeasts and molds ISO 21527-2. The results of the sensory analysis of the smoked ham indicate that after the first seven days of storage the samples showed visual changes on the surface in the form of salt deposits, most likely due to drying out of the internal parts of the product. After fifteen days of storage, the samples showed intense exterior changes, but the taste was still acceptable. Between the fifteenth and twenty-first day of storage, there were unacceptable changes on the surface and inside of the product and the occurrence of molds and yeasts, but neither of the analyzed pathogens was found.
Based on the obtained results, it can be concluded that this type of product cannot be stored for more than seven days at an elevated temperature of +15°C, because visual changes occur that would certainly influence customers' purchasing decisions.

Keywords: sustainability, smoked meat products, food engineering, agricultural process engineering

Procedia PDF Downloads 359
1982 Bio-Nano Mask: Antivirus and Antimicrobial Mouth Mask Coating with Nano-TiO2 and Anthocyanin Utilization as an Effective Solution of High ARI Patients in Riau

Authors: Annisa Ulfah Pristya, Andi Setiawan

Abstract:

Indonesia ranks sixth in the world in the total number of Acute Respiratory Infection (ARI) patients, and Riau is one of the provinces with the highest number of people with respiratory infections in Indonesia, reaching 37 thousand people. Masks are commonly used as a preventive measure. Unfortunately, a commercial mouth mask works for a maximum of only 4 hours, and its pores are too large to filter out the microorganisms and viruses carried by infectious droplet nuclei of 1-5 μm. On the other hand, Indonesia is rich in titanium dioxide (TiO2) and purple sweet potato anthocyanin pigment. Therefore, we offer the Bio-nano mask, an antimicrobial and antiviral mouth mask with a Nano-TiO2 coating that utilizes purple sweet potato anthocyanins, as an effective solution to the high number of ARI patients in Riau. Its advantages are that infectious droplets cannot adhere to the mask surface, that it is self-cleaning, and that it carries anthocyanin biosensors giving a visual response easily understood by the general public: the mask changes color from blue/purple to pink when acidity increases. Acidity is an indicator of the accumulation of microorganisms in the mouth and surrounding areas. The Bio-nano mask production process begins with preparation (design, Nano-TiO2 liquid preparation, anthocyanin biosensor manufacture), followed by coating the Nano-TiO2 onto the outer spunbond surface using a sprayer, laminating the anthocyanin biosensor film onto the meltblown surface, and finally assembling and packaging the mask. The Bio-nano mask thus effectively blocks pathogenic microorganisms and infectious droplets and provides an indicator of microorganism accumulation through a color change that is easily observed even by laypeople.

Keywords: anthocyanins, ARI, nano-TiO2 liquid, self cleaning

Procedia PDF Downloads 567
1981 Liquid Food Sterilization Using Pulsed Electric Field

Authors: Tanmaya Pradhan, K. Midhun, M. Joy Thomas

Abstract:

Increasing the shelf life and improving the quality are important objectives for the success of the packaged liquid food industry. One of the methods by which this can be achieved is by deactivating the micro-organisms present in the liquid food through pasteurization. Pasteurization is done by heating, but serious disadvantages such as reduction in food quality, flavour, taste, colour, etc. were observed because of heat treatment, which has led to the development of alternatives to pasteurization such as treatment using UV radiation, high pressure, nuclear irradiation, pulsed electric field, etc. In recent years, the use of the pulsed electric field (PEF) for inactivation of the microbial content in food has been gaining popularity. PEF applies a very high electric field for a short time to inactivate microorganisms, for which a high voltage pulsed power source is required. Pulsed power sources used for PEF treatments are usually in the range of 5 kV to 50 kV. Different pulse shapes are used, such as exponentially decaying and square wave pulses. Exponentially decaying pulses are generated by high power switches with only turn-on capability and therefore discharge the total energy stored in the capacitor bank. These pulses have a sudden onset and therefore a high rate of rise, but a very slow decay; the slow tail yields extra heat, which is ineffective for microbial inactivation. Square pulses can be produced by an incomplete discharge of a capacitor with the help of a switch having both on/off control, or by using a pulse forming network. In this work, a pulsed power-based system is designed with the help of high voltage capacitors and solid-state switches (IGBTs) for the inactivation of pathogenic micro-organisms in liquid foods such as fruit juices. The high voltage generator is based on the Marx generator topology, which can produce variable amplitude, frequency, and pulse width according to the requirements.
Liquid food is treated in a chamber where a pulsed electric field is produced between stainless steel electrodes using the pulsed output voltage of the supply. Preliminary bacterial inactivation tests were performed on orange juice inoculated with Escherichia coli. With the help of the developed pulsed power source and the chamber, the inoculated orange juice was PEF treated. The voltage was varied to reach a peak electric field of up to 15 kV/cm. For a total treatment time of 200 µs, a 30% reduction in the bacterial count was observed. The detailed results and analysis will be presented in the final paper.
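The treatment figures above can be put in perspective with a back-of-the-envelope sketch. The electrode gap below is an assumed value (the abstract reports only the peak field, not the geometry), and the log-reduction conversion simply restates the reported 30% count reduction.

```python
import math

# Field strength: E = V / d. A 1 cm gap is an assumed illustrative geometry;
# with it, a 15 kV pulse amplitude yields the 15 kV/cm peak field reported.
peak_voltage = 15e3          # V, assumed pulse amplitude
gap = 1e-2                   # m, assumed electrode spacing (1 cm)
field_kv_per_cm = peak_voltage / gap / 1e5   # 1 kV/cm = 1e5 V/m
print(f"Peak field: {field_kv_per_cm:.1f} kV/cm")

# A 30% reduction in bacterial count means a surviving fraction of 0.70,
# i.e. about a 0.15 log10 reduction - far from the 5-log targets typical
# of full pasteurization, consistent with these being preliminary tests.
surviving_fraction = 0.70
log_reduction = -math.log10(surviving_fraction)
print(f"Log10 reduction: {log_reduction:.3f}")
```

Expressing reductions in log10 units makes short preliminary runs directly comparable with the multi-log inactivation levels usually quoted for PEF pasteurization studies.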

Keywords: Escherichia coli bacteria, high voltage generator, microbial inactivation, pulsed electric field, pulsed forming line, solid-state switch

Procedia PDF Downloads 183