Search results for: tuning of process parameters
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 22162


3262 Evaluating the Effectiveness of Mesotherapy and Topical 2% Minoxidil for Androgenic Alopecia in Females, Using Topical 2% Minoxidil as a Common Treatment

Authors: Hamed Delrobai Ghoochan Atigh

Abstract:

Androgenic alopecia (AGA) is a common form of hair loss, affecting approximately 50% of females, that leads to reduced self-esteem and quality of life. It causes progressive follicular miniaturization in genetically predisposed individuals. Mesotherapy (a minimally invasive procedure), topical 2% minoxidil, and oral finasteride have emerged as popular treatment options in the realm of cosmetics; however, the efficacy of mesotherapy compared to the other options remains unclear. This study aims to assess the effectiveness of mesotherapy when it is added to topical 2% minoxidil treatment for female androgenic alopecia. Mesotherapy, also known as intradermotherapy, is a technique that entails administering multiple intradermal injections of a carefully composed mixture of compounds in low doses, applied at various points close to or directly over the affected areas. The study is a randomized controlled trial with 100 female participants diagnosed with androgenic alopecia. The subjects were randomly assigned to two groups: Group A used topical 2% minoxidil twice daily and took oral finasteride tablets; for Group B, 10 mesotherapy sessions were added to that treatment. The injections were administered every week in the first month of treatment, every two weeks in the second month, and monthly thereafter for four consecutive months. Response was assessed at baseline, at the 4th session, and finally after 6 months when the treatment was complete. Clinical photographs, a 7-point Likert-scale patient self-evaluation, and a 7-point Likert-scale assessment tool were used to measure the effectiveness of the treatment. During this evaluation, a significant and visible improvement in hair density and thickness was observed. The study demonstrated a significant increase in treatment efficacy in Group B compared to Group A post-treatment, with no adverse effects.
Based on the findings, mesotherapy appears to offer a significant improvement in female AGA over minoxidil-based treatment alone. Hair loss stopped in Group B after one month, and improvement in hair density and thickness was observed after the third month. The findings provide valuable insights into the efficacy of mesotherapy in treating female androgenic alopecia, and our evaluation offers a detailed assessment of hair growth parameters, enabling a better understanding of the treatments' effectiveness. The potential of this promising technique is greatest when it is carried out in a medical facility, guided by appropriate indications and skillful execution. An interesting observation in our study is that in areas where the hair had turned grey, the newly regrown hair did not retain the original grey color but instead grew in darker. The results contribute to evidence-based decision-making in dermatological practice and offer new insights into the treatment of female pattern hair loss.

Keywords: androgenic alopecia, female hair loss, mesotherapy, topical 2% minoxidil

Procedia PDF Downloads 102
3261 Effects of Conjugated Linoleic Acid (CLA) on Hormones and Factors Involved in Murine Ovulation

Authors: Leila Karshenas, Hamidreza Khodaei, Behnaz Mahdavi

Abstract:

Ovulation is a physiological process with an inflammatory response that depends on the coordinated activity of gonadotropins and steroid hormones, as well as inflammatory mediators such as cytokines, prostaglandins, leptin, and nitric oxide (NO). Conjugated linoleic acid (CLA) comprises polyunsaturated fatty acids (PUFA) found in dairy products, beef, and lamb, and there is strong evidence that dietary CLA affects mediators involved in ovulation. The aim of this study was to determine the effects of different doses of dietary CLA on systemic and local hormones and factors involved in ovulation. In this case-control study, 80 female mice (50±2 days old) were randomly divided into four groups (C as the controls and T1, T2, and T3 as the treatment groups). There were four replicates in each group and five mice in every replicate (20 mice per group). The mice in the control group were fed no CLA in their diet, while the treatment groups received 0.1, 0.3, and 0.5 g/kg of CLA (replacing corn oil in the diet), respectively, for 120 days. Blood samples were then obtained from the tails of animals displaying estrus signs, and estradiol (E2), progesterone (P4), LH, FSH, NO, leptin, and TNFα were measured. Furthermore, the effects of CLA on the ovarian production of prostaglandins (PGs) and NO were investigated. The data were analyzed with SAS software. CLA significantly decreased serum levels of FSH (p<0.05) and of LH, estradiol, NO, leptin, and TNFα (p<0.01). CLA also decreased progesterone levels, but this effect was statistically insignificant. Significantly negative effects of CLA were seen on the ovarian production of PGE2 and PGF2α (p<0.01). It appears that CLA may play an effective role in reducing the ovulation rate in mice, as CLA adversely affected female reproduction and had negative effects on systemic and local hormones involved in ovulation.

Keywords: conjugated linoleic acid, nitric oxide, ovary, ovulation, prostaglandin, gonadotropin

Procedia PDF Downloads 301
3260 Assessing the Efficiency of Pre-Hospital Scoring System with Conventional Coagulation Tests Based Definition of Acute Traumatic Coagulopathy

Authors: Venencia Albert, Arulselvi Subramanian, Hara Prasad Pati, Asok K. Mukhophadhyay

Abstract:

Acute traumatic coagulopathy is an endogenous dysregulation of the intrinsic coagulation system in response to injury; it is associated with a three-fold risk of poor outcome and is more amenable to corrective interventions following early identification and management. Multiple definitions for stratifying patients' risk of early acute coagulopathy have been proposed, with considerable variation in the defining criteria, including several trauma-scoring systems based on prehospital data. We aimed to develop a clinically relevant definition of acute coagulopathy of trauma based on conventional coagulation assays and to assess its efficacy in comparison to recently established prehospital prediction models. Methodology: Retrospective data of all trauma patients (n = 490) presented to our level I trauma center in 2014 were extracted. Receiver operating characteristic curve analysis was performed to establish cut-offs for conventional coagulation assays for the identification of patients with acute traumatic coagulopathy. Prospectively, data of 100 adult trauma patients were collected; the cohort was stratified by the established definition, classified as "coagulopathic" or "non-coagulopathic", and correlated with the Prediction of Acute Coagulopathy of Trauma score and the Trauma-Induced Coagulopathy Clinical Score for identifying trauma coagulopathy and the subsequent risk of mortality. Results: Data of 490 trauma patients (average age 31.85±9.04; 86.7% males) were extracted; 53.3% had head injury, 26.6% had fractures, and 7.5% had chest and abdominal injury. Acute traumatic coagulopathy was defined as international normalized ratio ≥ 1.19; prothrombin time ≥ 15.5 s; activated partial thromboplastin time ≥ 29 s. Of the 100 adult trauma patients (average age 36.5±14.2; 94% males), 63% had early coagulopathy based on our conventional coagulation assay definition.
The overall Prediction of Acute Coagulopathy of Trauma score was 118.7±58.5, and the Trauma-Induced Coagulopathy Clinical Score was 3 (range 0-8). Both scores were higher in coagulopathic than in non-coagulopathic patients (prediction score 123.2±8.3 vs. 110.9±6.8, p-value = 0.31; clinical score 4 (3-8) vs. 3 (0-8), p-value = 0.89), but the differences were not statistically significant. Overall mortality was 41%, and the mortality rate was significantly higher in coagulopathic than in non-coagulopathic patients (75.5% vs. 54.2%, p-value = 0.04). A high Prediction of Acute Coagulopathy of Trauma score was also significantly associated with mortality (134.2±9.95 vs. 107.8±6.82, p-value = 0.02), whereas the Trauma-Induced Coagulopathy Clinical Score did not vary between survivors and non-survivors. Conclusion: Early coagulopathy was seen in 63% of trauma patients and was significantly associated with mortality. Acute traumatic coagulopathy defined by conventional coagulation assays (international normalized ratio ≥ 1.19; prothrombin time ≥ 15.5 s; activated partial thromboplastin time ≥ 29 s) demonstrated good ability to identify coagulopathy and subsequent mortality in comparison to the prehospital parameter-based scoring systems. The Prediction of Acute Coagulopathy of Trauma score may be better suited to predicting mortality than early coagulopathy. In emergency trauma situations, where immediate corrective measures must be taken, complex multivariable scoring algorithms may cause delay, whereas coagulation parameters from conventional coagulation tests give highly specific results.
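The assay cut-offs above come from a receiver operating characteristic analysis. As a hedged sketch of one common way to derive such a cut-off (maximizing Youden's J; not necessarily the authors' exact procedure), the snippet below uses synthetic INR values, not the study's patient data:

```python
import numpy as np

def youden_cutoff(values, labels):
    """Return the threshold maximizing Youden's J = sensitivity + specificity - 1.
    `values`: assay results (higher = more abnormal); `labels`: 1 = coagulopathic."""
    values = np.asarray(values, dtype=float)
    labels = np.asarray(labels, dtype=int)
    best_t, best_j = None, -1.0
    for t in np.unique(values):  # every observed value is a candidate cut-off
        pred = values >= t
        sens = np.sum(pred & (labels == 1)) / np.sum(labels == 1)
        spec = np.sum(~pred & (labels == 0)) / np.sum(labels == 0)
        j = sens + spec - 1
        if j > best_j:
            best_j, best_t = j, t
    return best_t

# Illustrative synthetic INR values: a non-coagulopathic cluster near 1.0 and a
# coagulopathic cluster from 1.19 upward (all numbers invented for this sketch).
inr    = [0.9, 1.0, 1.05, 1.1, 1.19, 1.3, 1.4, 1.5]
labels = [0,   0,   0,    0,   1,    1,   1,   1]
print(youden_cutoff(inr, labels))  # 1.19
```

With perfectly separated toy data the chosen threshold is simply the lowest value in the abnormal cluster; on real data the analysis trades sensitivity against specificity.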

Keywords: trauma, coagulopathy, prediction, model

Procedia PDF Downloads 176
3259 Temporal Changes of Heterogeneous Subpopulations of Human Adipose-Derived Stromal/Stem Cells in vitro

Authors: Qiuyue Peng, Vladimir Zachar

Abstract:

The application of adipose-derived stromal/stem cells (ASCs) in regenerative medicine is gaining attention due to their advanced translational potential and abundant source preparations. However, ASC-based translation has been confounded by high subpopulation heterogeneity, causing ambiguity about their precise therapeutic value. Some phenotypes, defined by a unique combination of positive and negative surface markers, have been found beneficial for the required roles. Therefore, the immunophenotypic repertoires of cultured ASCs and the temporal changes of distinct subsets were investigated in this study. ASCs from three donors undergoing cosmetic liposuction were cultured by standard methods, and the co-expression patterns based on combinations of selected markers at passages 1, 4, and 8 were analyzed by multi-chromatic flow cytometry. The results showed that the heterogeneity of ASC subpopulations decreased with in vitro expansion. After a few passages, most of the CD166⁺/CD274⁺/CD271⁺-based subpopulations converged to CD166 single-positive cells. Meanwhile, the CD29⁺/CD201⁺ double-positive cells, with or without CD36/Stro-1 co-expression, featured only the major epitopes and remained prevalent throughout the whole process. This study suggests that, upon in vitro expansion, the phenotype repertoire of ASCs redistributes and stabilizes in a way that cells co-expressing exclusively the strong markers remain dominant. These preliminary findings provide a general overview of the distribution of the heterogeneous subsets resident within human ASCs during expansion in vitro. This is a critical step toward fully characterizing ASCs before clinical application, although the biological effects of the heterogeneous subpopulations still need to be clarified.

Keywords: adipose-derived stromal/stem cells, heterogeneity, immunophenotype, subpopulations

Procedia PDF Downloads 114
3258 A Survey on an E-Guide to Educational Tour Planning in Environmental Science among Standard Six Primary School Students of the Ministry of Education, Malaysia

Authors: A. Halim Sahelan, Mohd Halid Abu, Jamaluddin Hashim, Zulisman Maksom, Mohd Afif Md Nasir

Abstract:

This study aims to assess students' needs for a tour-planning e-guide. It builds on the contribution and importance of the Educational Tour Planning Guide (ETP), a multimedia courseware that serves as an effective method for teaching and learning environmental science among primary-school students of the Ministry of Education, Malaysia. The guide is intended to provide students with knowledge and experience of tourism and of environmental-science activities and processes; the e-guide to the ETP is also intended to strengthen students' understanding of the subject as learned in a tourism and environmental-science context. In order to assess students' needs, the study produced a prototype e-Guide to the ETP in the form of courseware to be tested during the study. The work involved several steps: formulation of the problem, review of the literature, formulation of the study methodology, production of the e-Guide to the ETP, a field survey, and finally analysis and discussion of the data gathered. The survey involved 100 respondents among standard six primary-school students in Kluang, Johor. The findings indicate that the tested product is acceptable to students as a guide for planning a tour while learning environmental science. The findings also show a slight difference between respondents who used the e-Guide to the ETP and those who did not. Given the importance of the study, the researchers hope for a fair discussion of, and sound recommendations for, the further development of the product. This report also provides a written reference for future related studies.

Keywords: the tour planning e-guide, the Educational Tour Planning Guide, environmental science, multimedia courseware

Procedia PDF Downloads 358
3257 Comparison of Different Hydrograph Routing Techniques in XPSTORM Modelling Software: A Case Study

Authors: Fatema Akram, Mohammad Golam Rasul, Mohammad Masud Kamal Khan, Md. Sharif Imam Ibne Amir

Abstract:

A variety of routing techniques are available to develop surface-runoff hydrographs from rainfall. The selection of a runoff routing method is vital, as it is directly related to the type of watershed and the required degree of accuracy. Different modelling software packages are available to explore the rainfall-runoff process in urban areas. XPSTORM, a link-node based, integrated storm-water modelling package, has been used in this study to develop the surface-runoff hydrograph for a golf course area located in Rockhampton in Central Queensland, Australia. Four commonly used methods, namely SWMM runoff, Kinematic Wave, Laurenson, and Time-Area, are employed to generate the runoff hydrograph for the design storm of this study area. In the runoff mode of XPSTORM, rainfall, infiltration, evaporation, and depression storage for the sub-catchments were simulated, and the runoff from each sub-catchment to its collection node was calculated. The simulation results are presented, discussed, and compared. The total surface runoff generated by the SWMM runoff, Kinematic Wave, and Time-Area methods was found to be reasonably close, which indicates that any of these methods can be used for developing the runoff hydrograph of the study area. The Laurenson method produces a comparatively smaller amount of surface runoff; however, it produces the highest runoff peak of all the methods, which may make it suitable for hilly regions. Although the Laurenson hydrograph technique is a widely accepted surface-runoff routing technique in Queensland (Australia), extensive investigation with detailed topographic and hydrologic data is recommended in order to assess its suitability for use in the case study area.
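Of the four methods, the Time-Area technique is the simplest to illustrate: the runoff hydrograph is the convolution of the rainfall-excess hyetograph with the catchment's time-area histogram. The sketch below is a minimal, generic illustration with invented numbers, not XPSTORM's implementation:

```python
import numpy as np

def time_area_hydrograph(rain_excess_mm, area_km2_per_step, dt_s):
    """Time-Area routing: convolve the rainfall-excess hyetograph (mm per step)
    with the time-area histogram (km^2 of catchment reaching the outlet per
    step). Returns the outlet discharge in m^3/s for each time step."""
    # 1 mm of excess over 1 km^2 = 1e-3 m * 1e6 m^2 = 1000 m^3 of runoff volume
    vols_m3 = np.convolve(rain_excess_mm, area_km2_per_step) * 1000.0
    return vols_m3 / dt_s  # volume per step -> average discharge

# Invented example: two 10-minute rain bursts over a catchment whose
# contributing area arrives at the outlet over three 10-minute bands.
q = time_area_hydrograph(rain_excess_mm=[10.0, 5.0],
                         area_km2_per_step=[0.2, 0.5, 0.3],
                         dt_s=600)
print(q.round(2))  # [ 3.33 10.    9.17  2.5 ]
```

The peak lands where the heavier burst coincides with the largest area band, which is the intuition behind the method's sensitivity to the time-area histogram.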

Keywords: ARI, design storm, IFD, rainfall temporal pattern, routing techniques, surface runoff, XPSTORM

Procedia PDF Downloads 453
3256 Time Travel Testing: A Mechanism for Improving Renewal Experience

Authors: Aritra Majumdar

Abstract:

While organizations strive to expand their new customer base, retaining existing relationships is a key aspect of improving overall profitability, and it also showcases how successful an organization is in holding on to its customers. It is an experimentally proven fact that the lion's share of profit always comes from existing customers, so seamless management of renewal journeys across different channels goes a long way toward improving trust in the brand. From a quality assurance standpoint, time travel testing provides an approach for both business and technology teams to enhance the customer experience when customers look to extend their partnership with the organization for a defined period of time. This whitepaper focuses on the key pillars of time travel testing: time travel planning, time travel data preparation, and enterprise automation. It also calls out some best practices and common accelerator implementation ideas that are generic across verticals like healthcare, insurance, etc. In this abstract, a high-level snapshot of these pillars is provided. Time Travel Planning: The first step in setting up a time travel testing roadmap is appropriate planning. Planning includes identifying the impacted systems that need to be time traveled backward or forward depending on the business requirement, aligning time travel with other releases, deciding the frequency of time travel testing, preparing to handle renewal issues in production after time travel testing is done, and, most importantly, planning for test automation during time travel testing. Time Travel Data Preparation: One of the most complex areas in time travel testing is test data coverage. Aligning test data to cover the required customer segments and narrowing it down to multiple offer sequences based on defined parameters are key to successful time travel testing.
Another aspect is the availability of sufficient data for similar combinations to support activities like defect retesting, regression testing, post-production testing (if required), etc. This section will talk about the necessary steps for suitable data coverage and sufficient data availability from a time travel testing perspective. Enterprise Automation: Time travel testing is never restricted to a single application. The workflow needs to be validated in the downstream applications to ensure consistency across the board. Along with that, the correctness of offers across different digital channels needs to be checked in order to ensure a smooth customer experience. This section will talk about the focus areas of enterprise automation and how automation testing can be leveraged to improve the overall quality without compromising on the project schedule. Along with the above-mentioned items, the white paper will elaborate on the best practices that need to be followed during time travel testing and some ideas pertaining to accelerator implementation. To sum it up, this paper will be written based on the real-time experience author had on time travel testing. While actual customer names and program-related details will not be disclosed, the paper will highlight the key learnings which will help other teams to implement time travel testing successfully.
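As a minimal sketch of the time-travel idea itself (not the author's actual framework; all names here are hypothetical), a test can inject a shiftable clock so that renewal logic is exercised at future dates without touching the server clock:

```python
from datetime import date, timedelta

class Clock:
    """Injected clock: production code uses the real date; time-travel tests
    construct it with an offset to move the system forward or backward."""
    def __init__(self, offset_days=0):
        self.offset = timedelta(days=offset_days)

    def today(self):
        return date.today() + self.offset

def renewal_due(policy_end, clock, window_days=30):
    """A renewal offer becomes due inside the window before the policy ends."""
    remaining = (policy_end - clock.today()).days
    return 0 <= remaining <= window_days

# Policy expires 100 days from now; travel 90 days forward to land inside
# the 30-day renewal window.
end = date.today() + timedelta(days=100)
assert not renewal_due(end, Clock())            # real date: too early
assert renewal_due(end, Clock(offset_days=90))  # time-traveled: window reached
print("renewal window reached after time travel")
```

Real implementations time-travel databases, batch schedulers, and downstream systems together; the injected-clock pattern is just the smallest self-contained illustration of the concept.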

Keywords: time travel planning, time travel data preparation, enterprise automation, best practices, accelerator implementation ideas

Procedia PDF Downloads 159
3255 Catalytic Production of Hydrogen and Carbon Nanotubes over Metal/SiO2 Core-Shell Catalyst from Plastic Wastes Gasification

Authors: Wei-Jing Li, Ren-Xuan Yang, Kui-Hao Chuang, Ming-Yen Wey

Abstract:

Nowadays, plastic production and utilization are extensive and have greatly improved our lives. Yet plastic wastes, being stable and non-biodegradable, pose a challenging environmental issue, and waste-to-energy strategies have emerged as a promising approach to waste management. This work investigated the co-production of hydrogen and carbon nanotubes from syngas obtained from the gasification of polypropylene. A nickel-silica core-shell catalyst was applied to the syngas reaction from plastic-waste gasification in a fixed-bed reactor. SiO2 supports were prepared by the Stöber process using various synthesis solvents, and Ni was loaded onto the modified SiO2 supports by the deposition-precipitation method. Core-shell catalysts exhibit a strong interaction between the active phase and the support, which helps avoid catalyst sintering. Moreover, Fe or Co metal acts as a promoter to enhance catalytic activity. The effects of the calcination atmosphere, second-metal addition, and reaction temperature on hydrogen production and carbon yield were examined. In this study, the catalytic activity and carbon yield results revealed that the Ni/SiO2 catalyst calcined under an H2 atmosphere exhibited the best performance. Furthermore, the Co-promoted Ni/SiO2 catalyst produced three times the carbon yield of Ni/SiO2 in long-term operation. The structure and morphology of the calcined and spent catalysts were examined using different characterization techniques, including scanning electron microscopy, transmission electron microscopy, and X-ray diffraction. In addition, the quality and thermal stability of the nano-carbon materials were evaluated by Raman spectroscopy and thermogravimetric analysis.

Keywords: plastic wastes, hydrogen, carbon nanotube, core-shell catalysts

Procedia PDF Downloads 319
3254 The Investment of Islamic Education Values toward Children in the Early Age through Story-Telling Method

Authors: Abdul Rofiq Badril Rizal Muzammil

Abstract:

Education is an absolute necessity of human life, one that must be pursued throughout life; without education, it is impossible for a person to develop well. The educational process is an effort to cultivate good behavior in one's life, and good behavior is most surely achieved when it is taught to children at an early age. This paper focuses on how the story-telling method enables teachers to help students construct good behavior and attain the goals of national education in Indonesia. The targeted students are those of the As-Solihin kindergarten, Salafiyah-Syafi'iyah Mumbulsari, Jember, Indonesia. Stories are what early-aged children like most; thus, story-telling is an excellent opportunity for instilling Islamic educational values in children. This paper also addresses several important aspects that teachers need to consider, including the objectives and strategies of the method's implementation. Teachers need to know each student's characteristics in the classroom so that they can select stories that best fit early-aged students. The selected stories are taken from Islamic stories that tell the lives of the Prophet, the heroes of Islam, and other well-known figures in Islam. In addition, a number of activities are conducted in the classroom after the delivery of the story, with the purpose of leading students to a fundamental foundation of self-awareness, so that they can better understand the importance of being a well-behaved person.
After reviewing relevant theories, secondary research, and scholars' opinions on all aspects of early-aged children's behavior, the author concludes that by leveraging trusted sources and a proactive, cooperative, and creative strategy, the teacher can successfully build up children's good behavior by instilling Islamic values in early-aged children through the story-telling method.

Keywords: story, Islam, children, early age

Procedia PDF Downloads 307
3253 A Hybrid Multi-Criteria Hotel Recommender System Using Explicit and Implicit Feedbacks

Authors: Ashkan Ebadi, Adam Krzyzak

Abstract:

Recommender systems, also known as recommender engines, have become an important research area and are now applied in various fields, and the techniques behind them have improved over time. In general, such systems help users find the products or services they require (e.g. books, music) by analyzing and aggregating other users' activities and behavior, mainly in the form of reviews, and making the best recommendations; these recommendations can facilitate the user's decision-making process. Despite the wide literature on the topic, using multiple data sources of different types as the input has not been widely studied. Recommender systems can benefit from the high availability of digital data to collect input data of different types, which implicitly or explicitly helps the system improve its accuracy. Moreover, most existing research in this area is based on single rating measures, in which a single rating is used to link users to items. This paper proposes a highly accurate hotel recommender system, implemented in several layers. Using a multi-aspect rating system and benefitting from large-scale data of different types, the recommender system suggests hotels that are personalized and tailored for the given user. The system employs natural language processing and topic modelling techniques to assess the sentiment of users' reviews and extract implicit features. The entire recommender engine contains multiple sub-systems, namely user clustering, a matrix factorization module, and a hybrid recommender system. Each sub-system contributes to the final composite set of recommendations by covering a specific aspect of the problem. The accuracy of the proposed recommender system has been tested intensively, and the results confirm the high performance of the system.
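As a hedged sketch of the matrix factorization sub-system only (the paper's full engine also includes user clustering, NLP, and sentiment analysis, which are omitted here), the toy example below factors a small user-hotel rating matrix with stochastic gradient descent and predicts a missing rating; all numbers are invented:

```python
import numpy as np

def matrix_factorization(R, k=2, steps=3000, lr=0.01, reg=0.02, seed=0):
    """Factor the rating matrix R (0 = unrated) into P @ Q.T by SGD on the
    observed entries; a common collaborative-filtering core for a hybrid
    recommender. Returns the dense matrix of predicted ratings."""
    rng = np.random.default_rng(seed)
    n_users, n_items = R.shape
    P = rng.normal(scale=0.1, size=(n_users, k))  # user latent factors
    Q = rng.normal(scale=0.1, size=(n_items, k))  # item latent factors
    observed = [(u, i) for u in range(n_users)
                for i in range(n_items) if R[u, i] > 0]
    for _ in range(steps):
        for u, i in observed:
            err = R[u, i] - P[u] @ Q[i]
            P[u] += lr * (err * Q[i] - reg * P[u])  # regularized SGD update
            Q[i] += lr * (err * P[u] - reg * Q[i])
    return P @ Q.T

# Toy 1-5 ratings for three users over three hotels. Users 0 and 1 agree on
# the first two hotels, so user 1's missing rating for hotel 2 should be
# predicted close to user 0's high rating.
R = np.array([[5, 1, 5],
              [5, 1, 0],
              [1, 5, 1]], dtype=float)
pred = matrix_factorization(R)
print(round(float(pred[1, 2]), 1))
```

In a hybrid engine this score would then be blended with content-based and sentiment-derived signals rather than used alone.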

Keywords: tourism, hotel recommender system, hybrid, implicit features

Procedia PDF Downloads 272
3252 Track and Evaluate Cortical Responses Evoked by Electrical Stimulation

Authors: Kyosuke Kamada, Christoph Kapeller, Michael Jordan, Mostafa Mohammadpour, Christy Li, Christoph Guger

Abstract:

Cortico-cortical evoked potentials (CCEPs) refer to responses generated by cortical electrical stimulation at distant brain sites. These responses provide insights into the functional networks associated with language or motor functions, and in the context of epilepsy, they can reveal pathological networks. Locating the origin and spread of seizures within the cortex is crucial for pre-surgical planning. This process can be enhanced by employing cortical stimulation at the seizure onset zone (SOZ), leading to the generation of CCEPs in remote brain regions that may be targeted for disconnection. In the case of a 24-year-old male patient suffering from intractable epilepsy, corpus callosotomy was performed as part of the treatment. DTI imaging, conducted using a 3T MRI scanner for fiber tracking, was used along with CCEPs as part of the assessment for surgical planning. Stimulation of the SOZ, with alternating monophasic pulses of 300 µs duration and 15 mA current intensity, resulted in CCEPs on the contralateral frontal cortex, reaching a peak amplitude of 206 µV with a latency of 31 ms, specifically in the left pars triangularis. The related fiber tracts were identified with a two-tensor unscented Kalman filter (UKF) technique, showing transversal fibers through the corpus callosum. The CCEPs were monitored throughout the surgery. Notably, the SOZ-associated CCEPs exhibited a reduction following the resection of the anterior portion of the corpus callosum, reaching the identified connecting fibers. This intervention demonstrated a potential strategy for mitigating the impact of intractable epilepsy through targeted disconnection of identified cortical regions.
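The peak amplitude and latency figures quoted above (206 µV at 31 ms) come from locating the largest post-stimulus deflection of the averaged response. A minimal sketch with a synthetic signal and an assumed 10-100 ms search window (not the authors' exact pipeline):

```python
import numpy as np

def ccep_peak(response_uv, fs_hz, stim_idx, search_ms=(10, 100)):
    """Locate the largest post-stimulus deflection of an averaged CCEP.
    Returns (peak amplitude in µV, latency in ms after stimulation).
    The 10-100 ms search window is an assumption of this sketch."""
    lo = stim_idx + int(search_ms[0] * fs_hz / 1000)
    hi = stim_idx + int(search_ms[1] * fs_hz / 1000)
    segment = response_uv[lo:hi]
    k = int(np.argmax(np.abs(segment)))      # largest deflection, either polarity
    latency_ms = (lo + k - stim_idx) * 1000 / fs_hz
    return abs(segment[k]), latency_ms

# Synthetic averaged response sampled at 1 kHz: a single 206 µV negative
# deflection placed 31 ms after a stimulation pulse at sample 100.
fs = 1000
signal = np.zeros(500)
signal[100 + 31] = -206.0
amp, lat = ccep_peak(signal, fs, stim_idx=100)
print(amp, lat)  # 206.0 31.0
```

Real recordings would be averaged over many stimulation trials and baseline-corrected before the peak search; the synthetic impulse stands in for that averaged waveform.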

Keywords: CCEP, SOZ, Corpus callosotomy, DTI

Procedia PDF Downloads 67
3251 The Construct of Personal Choice within Individual Language Shift: A Phenomenological Qualitative Study

Authors: Kira Gulko Morse

Abstract:

Choosing one’s primary language may not be as common as choosing an additional foreign language to study or use during travel. In some instances, however, it becomes a matter of internal personal struggle, as language is tied not only to specific circumstances but also to a person's background and identity. This phenomenological qualitative study focuses on the factors affecting a person's decision to undergo a language shift. Specifically, it considers how these factors relate to identity negotiation and expression. The data for the study include the analysis of published autobiographical narratives and personal interviews conducted using the Responsive Interviewing model. While the research participants come from a variety of geographical locations and had different reasons for undergoing their individual language shift, the study identifies a number of common features shared by all the participants. Specifically, while all the participants maintained their first language to varying degrees of proficiency, they all completed the shift to establish a primary language different from their first. Additionally, the process of self-identification was found to be directly connected to the phenomenon of language choice for each of the participants. The findings of the study further tie the phenomenon of individual language shift to the more comprehensive issue of individual life choices, such as ethnic revival, immigration, and inter-cultural marriage. The study discusses varying language roles, and the data indicate that language shift may occur whether language is a symbolic driving force or a secondary means of fulfilling a set life goal. The concept of language addition is suggested as an alternative to the arbitrariness of language shift. Thus, instead of focusing on subtractive bilingualism or language loss, the emphasis becomes the integration of languages within the individual.
The study emphasizes the importance of the construct of personal choice in its connection to individual language shift. It shifts the focus from society to the individual and to the individual's ability to make decisions in matters of linguistic identification.

Keywords: choice theory, identity negotiation, language shift, psycholinguistics

Procedia PDF Downloads 135
3250 Proactive Change or Adaptive Response: A Study on the Impact of Digital Transformation Strategy Modes on Enterprise Profitability From a Configuration Perspective

Authors: Jing Ma

Abstract:

Digital transformation (DT) is an important way for manufacturing enterprises to shape new competitive advantages, and choosing an effective DT strategy is crucial for enterprise growth and sustainable development. Rooted in strategic change theory, this paper incorporates the dimensions of managers' digital cognition, organizational conditions, and the external environment into the same strategic analysis framework and integrates the dynamic QCA method and the PSM method to study the antecedent configurations of the DT strategy modes of manufacturing enterprises and their impact on corporate profitability, based on data from listed manufacturing companies in China from 2015 to 2019. We find that the synergistic linkage of elements across different dimensions can form six equivalent paths to high-level DT. These can be summarized as proactive change modes, such as the resource-capability dominated mode, and adaptive response modes, such as industry-guided resource replenishment, capacity building under complex environments, market-industry synergy driving, and forced adaptation under peer pressure; the managers' digital cognition plays a non-necessary but crucial role in this process. Except for individual differences in the market-industry synergy-driven mode, the modes are stable across individual and temporal changes. However, it is worth noting that not all paths resulting in high levels of DT contribute to enterprise profitability: only high-level DT that results from matching the optimization of internal conditions with the external environment, such as industry technology and macro policies, has a significant positive impact on corporate profitability.

Keywords: digital transformation, strategy mode, enterprise profitability, dynamic QCA, PSM approach

Procedia PDF Downloads 24
3249 Examining Neo-colonialism and Power in Global Surgical Missions: An Historical, Practical and Ethical Analysis

Authors: Alex Knighton, Roba Khundkar, Michael Dunn

Abstract:

Neo-colonialism is defined as the use of economic, political, cultural, or other pressures to control or influence other countries, especially former dependencies, and concerns have been raised about its presence in surgical missions. Surgical missions aim to rectify the huge disparity in surgical access worldwide, but their ethics must be carefully considered, especially in light of the colonial history that shapes international relations and global health today, to ensure that colonial attitudes do not influence efforts to promote equity. This review examines the history of colonial global health, demonstrating that global health initiatives have consistently been used to benefit those providing them, and then asks whether elements of colonialism persist in surgical missions today. Data were collected from the literature using specified search terms and snowball searching, as well as from international expert web-based conferences on global surgery ethics. A thematic analysis of these data identified six themes present in both past and present global health initiatives: power, lack of understanding or respect, feelings of superiority, exploitation, enabling of dependency, and acceptance of poorer standards of care. An ethical analysis follows, concluding that concerns of power and neo-colonialism in global surgery would be addressed by adopting a framework of procedural justice that promotes a refined governance process in which stakeholders can propose and reject decisions that affect them. The paper argues that adopting this model would directly address the power disparity in the field, while also promoting an ethical framework through which the other concerns of power disparity and neo-colonialism identified in the present analysis can be addressed.

Keywords: medical ethics, global surgery, global health, neocolonialism, surgical missions

Procedia PDF Downloads 95
3248 In Vitro Antibacterial Activity of Selected Tanzania Medicinal Plants

Authors: Mhuji Kilonzo, Patrick Ndakidemi, Musa Chacha

Abstract:

Objective: To evaluate the antibacterial activity of four selected medicinal plants, namely Mystroxylon aethiopicum, Lonchocarpus capassa, Albizia anthelmentica and Myrica salicifolia, used for the management of bacterial infections in Tanzania. Methods: The Minimum Inhibitory Concentration (MIC) of the plant extracts against the tested bacterial species was determined using the 96-well microdilution method. In this method, 50 μL of nutrient broth were loaded into each well, followed by 50 μL of extract (100 mg/mL) to make a final volume of 100 μL. Subsequently, 50 μL were transferred from the first row of wells to the second row, and the process was repeated down the columns to the last wells, from which 50 μL were discarded. Thereafter, 50 μL of the selected bacterial suspension were added to each well, again making a final volume of 100 μL. The lowest concentration showing no bacterial growth was taken as the MIC. Results: L. capassa leaf ethyl acetate extract exhibited antibacterial activity against Salmonella kisarawe and Salmonella typhi, with MIC values of 0.39 and 0.781 mg/mL respectively. Likewise, L. capassa root bark ethyl acetate extracts inhibited the growth of S. typhi and E. coli, with MIC values of 0.39 and 0.781 mg/mL respectively. The M. aethiopicum leaf and root bark chloroform extracts displayed antibacterial activity against S. kisarawe and S. typhi respectively, with an MIC value of 0.781 mg/mL. The M. salicifolia stem bark ethyl acetate extract exhibited antibacterial activity against P. aeruginosa with an MIC value of 0.39 mg/mL, whereas the methanolic stem and root bark extracts of the same plant inhibited the growth of Proteus mirabilis and Klebsiella pneumoniae with an MIC value of 0.781 mg/mL. Conclusion: M. aethiopicum, L. capassa, A. anthelmentica and M. salicifolia are potential sources of antibacterial agents. Further studies to elucidate the structures of the antibacterial compounds and evaluate the active ingredients are recommended.
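As a sanity check on the dilution arithmetic, the twofold series described above can be reproduced in a few lines of Python (an illustrative sketch, not part of the study): starting from the 100 mg/mL stock, the broth and the inoculum each halve the extract concentration, so rows six and seven of the plate land at the two reported MIC levels (0.781 and 0.39 mg/mL).

```python
def microdilution_series(stock_mg_ml=100.0, rows=8):
    """Final extract concentration (mg/mL) in each row of a twofold
    96-well microdilution series, following the protocol above:
    50 uL broth + 50 uL extract, 1:1 serial transfer down the plate,
    then 50 uL of bacterial suspension added to every well."""
    pre_inoculum = stock_mg_ml / 2.0       # extract diluted 1:1 by broth in row 1
    finals = []
    for _ in range(rows):
        finals.append(pre_inoculum / 2.0)  # equal-volume inoculum halves it again
        pre_inoculum /= 2.0                # 1:1 carry-over into the next row
    return finals

concs = microdilution_series()
print([round(c, 3) for c in concs])
# rows 6 and 7 reproduce the MIC levels reported in the abstract
```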

Keywords: Albizia anthelmentica, Lonchocarpus capassa, Mystroxylon aethiopicum, Myrica salicifolia

Procedia PDF Downloads 219
3247 Monitoring of Rice Phenology and Agricultural Practices from Sentinel 2 Images

Authors: D. Courault, L. Hossard, V. Demarez, E. Ndikumana, D. Ho Tong Minh, N. Baghdadi, F. Ruget

Abstract:

In the global change context, efficient management of the available resources has become one of the most important topics, particularly for sustainable crop development. Timely assessment with high precision is crucial for water resource and pest management. Rice cultivated in the Camargue region of Southern France faces a twofold challenge: reducing soil salinity by flooding while at the same time reducing the number of herbicides that negatively impact the environment. This context has led farmers to diversify their crop rotations and agricultural practices. The objective of this study was to evaluate this diversity, both in cropping systems and in the agricultural practices applied to rice paddies, in order to quantify the impact on the environment and on crop production. The proposed method is based on the combined use of crop models and multispectral data acquired from the recent Sentinel 2 satellite sensors launched by the European Space Agency (ESA) within the framework of the Copernicus program. More than 40 images at fine spatial resolution (10 m in the optical range) were processed for 2016 and 2017 (with a revisit time of 5 days) to map crop types using the random forest method and to estimate biophysical variables (LAI) retrieved by inversion of the PROSAIL canopy radiative transfer model. Thanks to the high revisit time of Sentinel 2 data, it was possible to monitor the tillage of the soil before flooding and the second sowing made by some farmers to better control weeds. The temporal trajectories of the remote sensing data were analyzed for various rice cultivars to define the main parameters describing the phenological stages used to calibrate two crop models (STICS and SAFY). Results were compared to surveys conducted with 10 farms. A large variability of LAI was observed at farm scale (up to 2-3 m²/m²), which induced a significant variability in the simulated yields (up to 2 t/ha). Land use observations were also collected on more than 300 fields.
Various maps were elaborated: land use, LAI, flooding and sowing, and harvest dates. Together these maps allow a new typology to be proposed for classifying these paddy cropping systems. Key phenological dates estimated from the inverse procedures were validated against ground surveys. The proposed approach allowed the years to be compared and anomalies to be detected. The methods proposed here can be applied to different crops in various contexts and confirm the potential of remote sensing acquired at fine resolution, such as the Sentinel 2 system, for agricultural applications and environmental monitoring. This study was supported by the French national center of spatial studies (CNES, funded through the TOSCA program).
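To illustrate the kind of rule that turns a temporal trajectory into key phenological dates, the toy sketch below (invented day-of-year / LAI pairs, simple thresholds; not the authors' STICS/SAFY calibration) picks out emergence, peak canopy development, and an end-of-season date from a retrieved LAI series.

```python
# Toy LAI temporal trajectory: (day of year, LAI) pairs, invented for the sketch.
trajectory = [(100, 0.1), (120, 0.2), (140, 1.1), (160, 2.4),
              (180, 3.6), (200, 4.2), (220, 3.9), (240, 2.0), (260, 0.4)]

def key_dates(traj, emergence_lai=0.5, harvest_lai=0.5):
    """Simple threshold rules for phenological dates from an LAI series:
    emergence = first day LAI exceeds a threshold, peak = day of maximum LAI,
    harvest = first post-peak day LAI falls back below the threshold."""
    days = [d for d, _ in traj]
    lai = [v for _, v in traj]
    peak_day = days[lai.index(max(lai))]
    emergence = next(d for d, v in traj if v >= emergence_lai)
    harvest = next(d for d, v in traj if d > peak_day and v < harvest_lai)
    return emergence, peak_day, harvest

print(key_dates(trajectory))
```

Real pipelines fit a smooth curve to the cloud-free acquisitions before thresholding, but the logic of reading dates off the trajectory is the same.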

Keywords: agricultural practices, remote sensing, rice, yield

Procedia PDF Downloads 274
3246 An Analysis on Clustering Based Gene Selection and Classification for Gene Expression Data

Authors: K. Sathishkumar, V. Thiagarasu

Abstract:

Due to recent advances in DNA microarray technology, it is now feasible to obtain gene expression profiles of tissue samples at relatively low cost. Many scientists around the world take advantage of this gene profiling to characterize complex biological circumstances and diseases. Microarray techniques used in genome-wide gene expression and genome mutation analysis help scientists and physicians to understand pathophysiological mechanisms, make diagnoses and prognoses, and choose treatment plans. DNA microarray technology has now made it possible to simultaneously monitor the expression levels of thousands of genes during important biological processes and across collections of related samples. Elucidating the patterns hidden in gene expression data offers a tremendous opportunity for an enhanced understanding of functional genomics. However, the large number of genes and the complexity of biological networks greatly increase the challenges of comprehending and interpreting the resulting mass of data, which often consists of millions of measurements. A first step toward addressing this challenge is the use of clustering techniques, which are essential in the data mining process for revealing natural structures and identifying interesting patterns in the underlying data. This work presents an analysis of several algorithms proposed to deal with gene expression data effectively. Existing algorithms such as Support Vector Machines (SVM), the K-means algorithm and evolutionary algorithms are analyzed thoroughly to identify their advantages and limitations, and a performance evaluation of these algorithms is carried out to determine the best approach. In order to improve the classification performance of the best approach in terms of accuracy, convergence behavior and processing time, a hybrid clustering-based optimization approach is proposed.
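For readers unfamiliar with the baseline method analyzed here, a minimal K-means over toy expression profiles looks as follows (pure-Python sketch with invented data; real analyses use optimized library implementations on thousands of genes):

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Minimal K-means for gene expression profiles (lists of floats)."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # assign each profile to its nearest centroid (squared Euclidean)
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])))
            clusters[i].append(p)
        # recompute each centroid as the mean of its cluster (keep old if empty)
        centroids = [
            [sum(dim) / len(c) for dim in zip(*c)] if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
    return clusters

# Two obvious expression patterns: up-regulated vs down-regulated profiles
genes = [[2.1, 2.0, 1.9], [2.2, 2.1, 2.0], [-1.9, -2.0, -2.1], [-2.0, -2.2, -1.8]]
clusters = kmeans(genes, k=2)
print([len(c) for c in clusters])
```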

Keywords: microarray technology, gene expression data, clustering, gene selection

Procedia PDF Downloads 323
3245 Using Inverted 4-D Seismic and Well Data to Characterise Reservoirs from Central Swamp Oil Field, Niger Delta

Authors: Emmanuel O. Ezim, Idowu A. Olayinka, Michael Oladunjoye, Izuchukwu I. Obiadi

Abstract:

Monitoring of reservoir properties prior to well placement and production is a requirement for optimised and efficient oil and gas production. This is usually done using well log analyses and 3-D seismic, which are often prone to errors. However, 4-D (time-lapse) seismic, incorporating numerous 3-D seismic surveys of the same field acquired with the same parameters, portrays the transient changes in the reservoir due to production effects over time and could be utilised because it provides better resolution. There is, however, a dearth of information on the applicability of this approach in the Niger Delta. This study was therefore designed to apply 4-D seismic, well-log and geologic data to the monitoring of reservoirs in the EK field of the Niger Delta. It aimed at locating bypassed accumulations and ensuring effective reservoir management. The field (EK) covers an area of about 1200 km² of early Miocene (18 Ma) age. Data covering two 4-D vintages acquired over a fifteen-year interval were obtained from oil companies operating in the field. The data were analysed to determine the seismic structures, horizons, well-to-seismic ties (WST), and wavelets. Well logs and production history data from fifteen selected wells were also collected from the oil companies. Formation evaluation, petrophysical analysis and inversion, alongside geological data, were undertaken using Petrel, Shell-nDi, Techlog and Jason software. Well-to-seismic ties, formation evaluation and saturation monitoring using petrophysical and geological data and software were used to find bypassed hydrocarbon prospects. The seismic vintages were interpreted, and the amounts of change in the reservoir were defined by the differences between the Acoustic Impedance (AI) inversions of the base and the monitor seismic. AI rock properties were estimated from all the seismic amplitudes using controlled sparse-spike inversion, and the estimated rock properties were used to produce AI maps.
The structural analysis showed the dominance of NW-SE trending rollover collapsed-crest anticlines in EK, with hydrocarbons trapped northwards. There were good ties in wells EK 27 and EK 39. The analysed wavelets revealed consistent amplitude and phase for the WST; hence, a good match between the inverted impedance and the well data. Evidence of large pay thickness, ranging from 2875 ms (11420 ft TVDSS) to about 2965 ms, was found around well EK 39, with good yield properties. The comparison between the base AI and the current monitor, together with the generated AI maps, revealed zones of untapped hydrocarbons and assisted in determining fluid movement. The inverted sections through EK 27 and EK 39 (within 3101 m - 3695 m) indicated depletion in the reservoirs. The present non-uniform gas-oil contact and oil-water contact movements extended from 3554 to 3575 m. The 4-D seismic approach led to better reservoir characterisation, well development and the location of deeper and bypassed hydrocarbon reservoirs.
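The core 4-D measurement, the change in acoustic impedance between the base and monitor surveys, is simple to express. The sketch below (illustrative log values, not the EK field data; the 2% threshold is likewise assumed) computes AI = density × velocity cell by cell and flags cells whose impedance change exceeds the threshold, as one would when mapping production effects.

```python
def acoustic_impedance(density_kg_m3, velocity_m_s):
    # AI = rho * Vp, in kg/(m^2 s)
    return density_kg_m3 * velocity_m_s

# Illustrative (density, P-velocity) samples for four cells, base vs monitor
base =    [(2200.0, 2600.0), (2250.0, 2700.0), (2300.0, 2900.0), (2350.0, 3000.0)]
monitor = [(2200.0, 2600.0), (2280.0, 2850.0), (2300.0, 2905.0), (2400.0, 3200.0)]

changed = []
for i, ((rb, vb), (rm, vm)) in enumerate(zip(base, monitor)):
    ai_b = acoustic_impedance(rb, vb)
    ai_m = acoustic_impedance(rm, vm)
    pct = 100.0 * (ai_m - ai_b) / ai_b   # percent 4-D impedance change
    if abs(pct) > 2.0:                   # flag cells with significant change
        changed.append((i, round(pct, 1)))

print(changed)
```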

Keywords: reservoir monitoring, 4-D seismic, well placements, petrophysical analysis, Niger delta basin

Procedia PDF Downloads 116
3244 Protection of the Rights of Outsourced Employees and the Effect on Job Performance in Nigerian Banking Sector

Authors: Abiodun O. Ibude

Abstract:

Several organizations have devised the strategy of engaging the services of staff not directly employed by them in their production and service delivery. Some organizations also engage in contracting another organization to carry out part of a service or production process on their behalf. Outsourcing is becoming an important alternative employment option for most organizations. This paper attempts an exposition of the rights of workers within the more specific context of outsourcing as a human resource management phenomenon. Outsourced employees and their rights are treated conceptually and analytically in a generic sense, as a subset of the larger whole, that is, labor. Outsourced employees derive their rights, like all workers, from their job context as well as the legal environment (municipal and global) in which they operate. The dynamics of globalization and the implications of this development for labor practices receive considerable attention in this exposition. In this regard, a guarded proposition is made to examine the practice and effect of engaging in outsourcing as an economic decision designed primarily to cut operational costs, rather than a human resources management decision to improve worker welfare. The study sample was selected using purposive and simple random sampling techniques. Data obtained were analyzed using simple percentages, Pearson product-moment correlation, and cross-tabulation. The research revealed that, although outsourcing offers opportunities for organizations, its implementation has drawbacks, notably for job security, and that some employees are being exploited through this strategy. This gives rise to lower motivation and thereby a decline in performance. In conclusion, there is a need to examine Human Resource Managers’ strategies that can serve as management policy tools for the protection of the rights of outsourced employees.
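For readers unfamiliar with the statistic named in the methods, the Pearson product-moment correlation can be computed as follows (toy scores invented for the sketch, not the study's data):

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient between two samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Toy data: motivation scores vs performance ratings for five employees
motivation = [3, 4, 2, 5, 1]
performance = [3.1, 3.9, 2.2, 4.8, 1.4]
print(round(pearson_r(motivation, performance), 3))
```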

Keywords: legal environment, operational cost, outsourcing, protection

Procedia PDF Downloads 127
3243 Motivation and Attitudes toward Learning English and German as Foreign Languages among Sudanese University Students

Authors: A. Ishag, E. Witruk, C. Altmayer

Abstract:

Motivation and attitudes are considered hypothetical psychological constructs in explaining the process of second language learning. Gardner (1985), who first systematically investigated motivational factors in second language acquisition, found that L2 achievement is related not only to the individual learner’s linguistic aptitude or general intelligence but also to the learner’s motivation and interest in learning the target language. Traditionally, language learning motivation is divided into two types: integrative motivation, the desire to integrate oneself with the target culture, and instrumental motivation, the desire to learn a language in order to meet a specific language requirement, such as for employment. One of Gardner’s main ideas is that integrative motivation plays an important role in second language acquisition: it is directly and positively related to second language achievement, more so than instrumental motivation. However, the significance of integrative motivation reflects a rather controversial set of findings. On the other hand, students’ attitudes towards the target language, its speakers and the learning context may all play some part in explaining their success in learning a language. Accordingly, the present study aims at exploring the significance of motivational and attitudinal factors in learning foreign languages, namely English and German, among Sudanese undergraduate students from a psycholinguistic and interdisciplinary perspective. The sample comprised 221 students from the English and German language departments respectively at the University of Khartoum in Sudan. The results indicate that learners of English are instrumentally motivated and that learners of German have positive attitudes towards the German language community and culture.
Furthermore, there are statistically significant differences in attitudes toward the two languages due to gender, with female students holding more positive attitudes than their male counterparts. However, there are no differences along the variables of academic grade and study level. Finally, the reasons for studying English or German are also reported.

Keywords: motivation and attitudes, foreign language learning, English language, German language

Procedia PDF Downloads 683
3242 Software Development for AASHTO and Ethiopian Roads Authority Flexible Pavement Design Methods

Authors: Amare Setegn Enyew, Bikila Teklu Wodajo

Abstract:

The primary aim of flexible pavement design is to ensure the development of economical and safe road infrastructure. However, failures can still occur due to improper or erroneous structural design. In Ethiopia, the design of flexible pavements relies on manual calculations and on selecting the pavement structure from a catalogue. The catalogue offers, in eight different charts, alternative structures for combinations of traffic and subgrade classes, as outlined in the Ethiopian Roads Authority (ERA) Pavement Design Manual 2001. Furthermore, design modification is allowed in accordance with the structural number principles outlined in the AASHTO 1993 Guide for Design of Pavement Structures. Nevertheless, the manual calculation and design process involves the use of nomographs, charts, tables, and formulas, which increases the likelihood of human error and inaccuracy, and this may lead to unsafe or uneconomical road construction. To address this challenge, a software package called AASHERA has been developed for the AASHTO 1993 and ERA design methods, using the MATLAB language. The software accurately determines the required thicknesses of the flexible pavement surface, base, and subbase layers for the two methods. It also digitizes design inputs and references such as nomographs, charts, default values, and tables. Moreover, the software allows easier comparison of the two design methods in terms of results and construction cost. AASHERA's accuracy has been confirmed through comparisons with designs from handbooks and manuals. The software can reduce human error, inaccuracy, and time consumption compared to the conventional manual design methods employed in Ethiopia. AASHERA, with its validated accuracy, proves to be an indispensable tool for flexible pavement structure designers.
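The AASHTO 1993 layer check that such software automates rests on the structural number equation, SN = a1·D1 + a2·m2·D2 + a3·m3·D3. A minimal sketch of that check (illustrative textbook layer coefficients and an assumed required SN, not ERA catalogue values or AASHERA's internals) is:

```python
def structural_number(layers):
    """AASHTO 1993 structural number from (layer coefficient a_i,
    drainage coefficient m_i, thickness D_i in inches) tuples;
    m = 1.0 for the surface course."""
    return sum(a * m * d for a, m, d in layers)

# Illustrative section: 4 in asphalt (a=0.44), 8 in granular base (a=0.14),
# 6 in subbase (a=0.11); drainage coefficients taken as 1.0.
section = [(0.44, 1.0, 4.0), (0.14, 1.0, 8.0), (0.11, 1.0, 6.0)]
sn_provided = structural_number(section)
print(round(sn_provided, 2))

required_sn = 3.2   # would come from the AASHTO design equation / nomograph
print(sn_provided >= required_sn)
```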

Keywords: flexible pavement design, AASHTO 1993, ERA, MATLAB, AASHERA

Procedia PDF Downloads 63
3241 Contribution to the Study of Automatic Epileptiform Pattern Recognition in Long Term EEG Signals

Authors: Christine F. Boos, Fernando M. Azevedo

Abstract:

Electroencephalogram (EEG) is a record of the electrical activity of the brain that has many applications, such as monitoring alertness, coma and brain death; locating damaged areas of the brain after head injury, stroke and tumor; monitoring anesthesia depth; researching physiology and sleep disorders; and researching epilepsy and localizing the seizure focus. Epilepsy is a chronic condition, or a group of diseases of high prevalence, still poorly explained by science and whose diagnosis is still predominantly clinical. The EEG recording is considered an important test for epilepsy investigation, and its visual analysis is very often applied for clinical confirmation of the epilepsy diagnosis. Moreover, this EEG analysis can also be used to help define the type of epileptic syndrome, determine the epileptogenic zone, assist in the planning of drug treatment and provide additional information about the feasibility of surgical intervention. In the context of diagnosis confirmation, the analysis is made using long term EEG recordings at least 24 hours long and acquired by a minimum of 24 electrodes, in which the neurophysiologists perform a thorough visual evaluation of EEG screens in search of specific electrographic patterns called epileptiform discharges. Considering that the EEG screens usually display 10 seconds of the recording, the neurophysiologist has to evaluate 360 screens per hour of EEG, or a minimum of 8,640 screens per long term EEG recording. Analyzing thousands of EEG screens in search of patterns that have a maximum duration of 200 ms is a very time-consuming, complex and exhaustive task. Because of this, over the years several studies have proposed automated methodologies that could facilitate the neurophysiologists’ task of identifying epileptiform discharges, and a large number of these methodologies used neural networks for pattern classification.
One of the differences between these methodologies is the type of input stimuli presented to the networks, i.e., how the EEG signal is introduced into the network. Five types of input stimuli are commonly found in the literature: the raw EEG signal, morphological descriptors (i.e. parameters related to the signal’s morphology), the Fast Fourier Transform (FFT) spectrum, Short-Time Fourier Transform (STFT) spectrograms and Wavelet Transform features. This study evaluates the application of these five types of input stimuli and compares the classification results of neural networks that were implemented using each of them. The performance when using the raw signal varied between 43% and 84% efficiency. The results for the FFT spectrum and STFT spectrograms were quite similar, with average efficiencies of 73% and 77%, respectively. The efficiency of Wavelet Transform features varied between 57% and 81%, while the morphological descriptors presented efficiency values between 62% and 93%. After the simulations, we observed that the best results were achieved when either morphological descriptors or Wavelet features were used as input stimuli.
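The workload figures quoted above follow directly from the screen length; a two-line check (using the 10 s screens and 24 h minimum recording stated in the abstract):

```python
screen_s = 10                            # one EEG screen shows 10 s of recording
screens_per_hour = 3600 // screen_s
screens_per_day = 24 * screens_per_hour  # minimum screens per long term recording

print(screens_per_hour, screens_per_day)  # 360 8640
```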

Keywords: artificial neural network, electroencephalogram signal, pattern recognition, signal processing

Procedia PDF Downloads 528
3240 Corpus-Based Model of Key Concepts Selection for the Master English Language Course "Government Relations"

Authors: Elena Pozdnyakova

Abstract:

“Government Relations” (GR) is a field of knowledge presently taught at the majority of universities around the globe. English as the default language can become the language of teaching, since the issues discussed are both global and national in character. However, for this field of knowledge, key concepts and their word representations in English often do not coincide with those in other languages. International master’s degree students abroad, as well as students taught the course in English at their national universities, are exposed to difficulties connected with correctly conceptualizing the terminology of GR in the British and American academic traditions. The study was carried out during the elaboration of the GR English language course (pilot research: 2013-2015) at the Moscow State Institute of International Relations (University), Russian Federation. Within this period, English language instructors designed and elaborated a three-semester course of GR. Methodologically, the course design was based on the elaboration model, with special focus on the conceptual elaboration sequence and the theoretical elaboration sequence. The course designers faced difficulties in concept selection and in the theoretical elaboration sequence. To improve the results and eliminate the problems with concept selection, a new, corpus-based approach was worked out. The computer-based tool WordSmith 6.0 was used with the aim of building a model of key concept selection. The corpus of GR English texts consisted of 1 million words (the study corpus). The approach was based on measuring effect size, i.e. the percent difference of the frequency of a word in the study corpus when compared to that in the reference corpus. The results obtained proved a significant improvement in the process of concept selection. The corpus-based model also facilitated the theoretical elaboration of teaching materials.
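The effect-size measure described, the percent difference of normalized frequencies between the study and reference corpora, can be sketched in a few lines (toy counts invented for the example; WordSmith reports the same quantity):

```python
def percent_diff(freq_study, size_study, freq_ref, size_ref):
    """%DIFF effect size: percent difference of the normalized frequency of a
    word in the study corpus relative to the reference corpus."""
    norm_study = freq_study / size_study
    norm_ref = freq_ref / size_ref
    return 100.0 * (norm_study - norm_ref) / norm_ref

# Toy example: a term occurs 500 times in the 1,000,000-word study corpus
# and 100 times in a 10,000,000-word reference corpus.
print(round(percent_diff(500, 1_000_000, 100, 10_000_000), 1))  # 4900.0
```

A large positive value marks a word as far more frequent in the GR texts than in general English, i.e. a candidate key concept.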

Keywords: corpus-based study, English as the default language, key concepts, measuring effect size, model of key concept selection

Procedia PDF Downloads 306
3239 Target-Triggered DNA Motors and their Applications to Biosensing

Authors: Hongquan Zhang

Abstract:

Inspired by endogenous protein motors, researchers have constructed various synthetic DNA motors based on the specificity and predictability of Watson-Crick base pairing. However, the application of DNA motors to signal amplification and biosensing is limited by low mobility and the difficulty of monitoring the walking process in real time. The objective of our work was to construct a new type of DNA motor, termed target-triggered DNA motors, that can walk for hundreds of steps in response to a single target binding event. To improve the mobility and processivity of DNA motors, we used gold nanoparticles (AuNPs) as scaffolds to build high-density, three-dimensional tracks; hundreds of track strands are conjugated to a single AuNP. To enable the DNA motors to respond to specific protein and nucleic acid targets, we adapted binding-induced DNA assembly into the design of the target-triggered DNA motors. In response to the binding of specific target molecules, the DNA motors are activated to walk autonomously along the AuNP, powered by a nicking endonuclease or DNAzyme-catalyzed cleavage of the track strands. Each moving step restores the fluorescence of a dye molecule, enabling the operation of the DNA motors to be monitored in real time. The motors can translate a single binding event into the generation of hundreds of oligonucleotides from a single nanoparticle. The motors have been applied to amplify the detection of proteins and nucleic acids in test tubes and live cells, and were able to detect low pM concentrations of specific protein and nucleic acid targets in homogeneous solutions without the need for separation. Target-triggered DNA motors are significant for broadening the applications of DNA motors to molecular sensing, cell imaging, molecular interaction monitoring, and the controlled delivery and release of therapeutics.

Keywords: biosensing, DNA motors, gold nanoparticles, signal amplification

Procedia PDF Downloads 84
3238 Synthesis of Carbon Nanotubes from Coconut Oil and Fabrication of a Non Enzymatic Cholesterol Biosensor

Authors: Mitali Saha, Soma Das

Abstract:

The fabrication of nanoscale materials for use in chemical sensing, biosensing and biological analyses has proven a promising avenue in the last few years. Cholesterol has aroused considerable interest in recent years on account of its being an important parameter in clinical diagnosis. There is a strong positive correlation between high serum cholesterol levels and arteriosclerosis, hypertension, and myocardial infarction. Enzyme-based electrochemical biosensors have shown high selectivity and excellent sensitivity, but the enzyme is easily denatured during the immobilization procedure and its activity is also affected by temperature, pH, and toxic chemicals. Besides, the reproducibility of enzyme-based sensors is not very good, which further restricts the application of cholesterol biosensors. It has been demonstrated that carbon nanotubes can promote electron transfer with various redox-active proteins, ranging from cytochrome c to glucose oxidase with its deeply embedded redox center. In continuation of our earlier work on the synthesis and applications of carbon- and metal-based nanoparticles, we report here the synthesis of carbon nanotubes (CCNT) by burning coconut oil under an insufficient flow of air using an oil lamp. The soot was collected from the top portion of the flame, where the temperature was around 650 °C, and was purified, functionalized and then characterized by SEM, p-XRD and Raman spectroscopy. The SEM micrographs showed the formation of a tubular structure of CCNT with diameters below 100 nm. The XRD pattern showed two predominant peaks, at 25.2° and 43.8°, corresponding to the (002) and (100) planes of CCNT respectively. The Raman spectrum (514 nm excitation) showed a band at 1600 cm-1 (G-band), related to the vibration of sp2-bonded carbon, and one at 1350 cm-1 (D-band), arising from the vibrations of sp3-bonded carbon.
A non-enzymatic cholesterol biosensor was then fabricated on an insulating Teflon substrate containing three silver wires at the surface, covered by the CCNT obtained from coconut oil. Here, the CCNTs served as both the working and counter electrodes, whereas the reference electrode and electric contacts were made of silver. The dimensions of the electrode were 3.5 cm × 1.0 cm × 0.5 cm (length × width × height), making it ideal for working with 50 µL volumes, like standard screen-printed electrodes. The voltammetric behavior of cholesterol at the CCNT electrode was investigated by cyclic voltammetry and differential pulse voltammetry using 0.001 M H2SO4 as the electrolyte. The influence of the experimental parameters on the peak currents of cholesterol, such as pH, accumulation time, and scan rate, was optimized. Under optimum conditions, the peak current was linear in the cholesterol concentration range from 1 µM to 50 µM, with a sensitivity of ~15.31 μA μM-1 cm-2, a lower detection limit of 0.017 µM and a response time of about 6 s. The long-term storage stability of the sensor was tested for 30 days, and the current response was found to be ~85% of its initial response after 30 days.
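The figures of merit quoted (sensitivity, detection limit) come from a linear calibration. A generic sketch of that calculation (synthetic calibration points and an assumed blank noise level, not the paper's measurements) fits current against concentration by least squares and applies the common 3σ/slope rule for the detection limit:

```python
def linear_fit(x, y):
    """Ordinary least-squares slope and intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    return slope, my - slope * mx

# Synthetic calibration: peak current (uA) vs cholesterol concentration (uM)
conc = [1.0, 5.0, 10.0, 20.0, 50.0]
current = [0.5, 2.6, 5.1, 10.2, 25.4]
slope, intercept = linear_fit(conc, current)   # slope ~ sensitivity, uA/uM

blank_sd = 0.003                               # assumed blank std. dev. (uA)
lod = 3 * blank_sd / slope                     # 3-sigma detection limit (uM)
print(round(slope, 3), round(lod, 4))
```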

Keywords: coconut oil, CCNT, cholesterol, biosensor

Procedia PDF Downloads 282
3237 Theoretical Evaluation of Minimum Superheat, Energy and Exergy in a High-Temperature Heat Pump System Operating with Low GWP Refrigerants

Authors: Adam Y. Sulaiman, Donal F. Cotter, Ming J. Huang, Neil J. Hewitt

Abstract:

Suitable low global warming potential (GWP) refrigerants that conform to F-gas regulations are required to extend the operational envelope of high-temperature heat pumps (HTHPs) used for industrial waste heat recovery. The thermophysical properties and characteristics of these working fluids need to be assessed to provide a comprehensive understanding of their operational effectiveness in HTHP applications. This paper presents the results of a theoretical simulation investigating a range of low-GWP refrigerants and their suitability to supersede refrigerants HFC-245fa and HFC-365mfc. A steady-state thermodynamic model of a single-stage HTHP with an internal heat exchanger (IHX) was developed to assess system cycle characteristics for heat source temperatures of 50 to 80 °C and heat sink temperatures of 90 to 150 °C. A practical approach to maximizing operational efficiency was examined to determine the effects of regulating minimum superheat within the process and the subsequent influence on energetic and exergetic efficiencies. A comprehensive map of minimum superheat across the HTHP operating variables was used to assess specific tipping points in performance at 30 K and 70 K temperature lifts. Based on initial results, the refrigerants HCFO-1233zd(E) and HFO-1336mzz(Z) were found to be close matches for refrigerants HFC-245fa and HFC-365mfc, respectively. The overall results show that effective performance for HCFO-1233zd(E) occurs between 5-7 K minimum superheat, and for HFO-1336mzz(Z) between 18-21 K, depending on temperature lift. This work provides a method to optimize refrigerant selection based on operational indicators to maximize overall HTHP system performance.
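As a back-of-envelope bound on the energetic side (the Carnot limit, not the authors' cycle model), the ideal heating COP for the lifts studied follows from COP = T_sink / (T_sink − T_source) with temperatures in kelvin; the example sink temperatures below are picked from within the stated 90-150 °C range:

```python
def carnot_heating_cop(t_source_c, t_sink_c):
    """Upper-bound (Carnot) heating COP between source and sink, Celsius inputs."""
    t_sink_k = t_sink_c + 273.15
    lift_k = t_sink_c - t_source_c
    return t_sink_k / lift_k

# The two temperature lifts examined in the abstract, from an 80 °C source
print(round(carnot_heating_cop(80.0, 110.0), 2))   # 30 K lift
print(round(carnot_heating_cop(80.0, 150.0), 2))   # 70 K lift
```

A real single-stage cycle reaches only a fraction of these values, which is why superheat tuning and refrigerant choice matter at the larger lift.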

Keywords: high-temperature heat pump, minimum superheat, energy & exergy efficiency, low GWP refrigerants

Procedia PDF Downloads 184
3236 Localized Detection of ᴅ-Serine by Using an Enzymatic Amperometric Biosensor and Scanning Electrochemical Microscopy

Authors: David Polcari, Samuel C. Perry, Loredano Pollegioni, Matthias Geissler, Janine Mauzeroll

Abstract:

ᴅ-serine acts as an endogenous co-agonist for N-methyl-ᴅ-aspartate receptors in neuronal synapses. This makes it a key component in the development and function of a healthy brain, especially given its role in several neurodegenerative diseases such as Alzheimer’s disease and dementia. Despite such clear research motivations, the primary site and mechanism of ᴅ-serine release remain unclear. For this reason, we are developing a biosensor for the detection of ᴅ-serine using a microelectrode in combination with a ᴅ-amino acid oxidase enzyme, which produces stoichiometric quantities of hydrogen peroxide in response to ᴅ-serine. To give the biosensor good selectivity, we use a permselective poly(meta-phenylenediamine) film, which ensures that only the target molecule reacts at the electrode, according to the size-exclusion principle. In this work, we investigated the effect of the electrodeposition conditions on the biosensor’s response time and selectivity. Careful optimization of the fabrication process enhanced the biosensor response time, allowing real-time sensing of ᴅ-serine in bulk solution and also providing a means to map the efflux of ᴅ-serine in real time. This was done by using scanning electrochemical microscopy (SECM) with the optimized biosensor to measure localized release of ᴅ-serine from an agar-filled glass capillary sealed in an epoxy puck, which acted as a model system. The SECM area scan simultaneously provided information on the rate of ᴅ-serine flux from the model substrate and on the size of the substrate itself. This SECM methodology, which provides high spatial and temporal resolution, could be used to investigate the primary site and mechanism of ᴅ-serine release in other biological samples.

Keywords: ᴅ-serine, enzymatic biosensor, microelectrode, scanning electrochemical microscopy

Procedia PDF Downloads 228
3235 Applying Resilience Engineering to Improve Safety Management in a Construction Site: Design and Validation of a Questionnaire

Authors: M. C. Pardo-Ferreira, J. C. Rubio-Romero, M. Martínez-Rojas

Abstract:

Resilience Engineering is a new paradigm of safety management that proposes to change how safety is managed by focusing on the things that go well instead of the things that go wrong. Many complex and high-risk sectors, such as air traffic control, health care, nuclear power plants, railways and emergency services, have applied this new vision of safety and have obtained very positive results. In the construction sector, safety management continues to be a problem, as indicated by the statistics of occupational injuries worldwide. It is therefore important to improve safety management in this sector, and for this reason it is proposed to apply Resilience Engineering to construction. The Construction Phase Health and Safety Plan emerges as a key element in the planning of safety management. One of the key tools of Resilience Engineering is the Resilience Assessment Grid, which measures the four essential abilities (respond, monitor, learn and anticipate) required for resilient performance. The purpose of this paper is to develop a questionnaire based on the Resilience Assessment Grid, specifically on the ability to learn, to assess whether a Construction Phase Health and Safety Plan helps companies on a construction site to implement this ability. The research process was divided into four stages: (i) initial design of a questionnaire, (ii) validation of the content of the questionnaire, (iii) redesign of the questionnaire and (iv) application of the Delphi method. The resulting questionnaire could be used as a tool to help construction companies evolve from Safety-I to Safety-II. In this way, companies could begin to develop the ability to learn, which will serve as a basis for the development of the other abilities necessary for resilient performance. The next steps in this research are intended to develop further questions to evaluate the remaining abilities for resilient performance: responding, monitoring and anticipating.

Keywords: resilience engineering, construction sector, resilience assessment grid, construction phase health and safety plan

Procedia PDF Downloads 137
3234 Arothron Stellatus Fish Skin Collagen Based Composite Biosheet Incorporated with Mupirocin as a Potential Dermal Substitute for Skin Tissue Regeneration

Authors: Giriprasath Ramanathan, Sivakumar Singaravelu, M. D. Raja, Uma Tirichurapalli Sivagnanam

Abstract:

Collagen is the most abundant protein in animal skin and provides adequate structural support for the adhesion of cells. Dressing materials widely used in tissue engineering and biomedical applications must possess good swelling and biological properties for the absorption of exudates and for cell proliferation. Acid-solubilised collagen was extracted from the skin of Arothron stellatus. Blending this collagen with hydroxypropyl methyl cellulose or carboxymethyl cellulose improves its biological properties and enhances healing efficiency. The intrinsic properties of collagen offer interesting perspectives for tissue engineering and motivate the development of biomaterials that combine a natural polymer with biologically derived collagen. With this as the objective, a composite biomaterial was fabricated to improve wound healing and biological properties. In this study, collagen from Arothron stellatus fish skin (ACO) was uniformly blended separately with hydroxypropyl methyl cellulose (HPMC) and carboxymethyl cellulose (CMC) to form biosheets. The cast biosheets were impregnated with mupirocin to protect against microbial infection. The biosheets were then assessed by differential scanning calorimetry (DSC), thermogravimetric analysis (TGA), tensile studies and biocompatibility tests. The swelling, porosity and degradation of the cast biosheets were also studied to establish their suitability as wound dressing materials. Both ACO-HPMC and ACO-CMC biosheets showed good results, but the ACO-HPMC biosheet performed better than ACO-CMC and can therefore be used as a potential dermal substitute in skin tissue engineering.

Keywords: arothron stellatus, biocompatibility, collagen, tensile strength

Procedia PDF Downloads 321
3233 Comparison of Sediment Rating Curve and Artificial Neural Network in Simulation of Suspended Sediment Load

Authors: Ahmad Saadiq, Neeraj Sahu

Abstract:

Sediment, which comprises solid particles of mineral and organic material, is transported by water. In river systems, the amount of sediment transported is controlled by both the transport capacity of the flow and the supply of sediment. The transport of sediment in rivers is important with respect to pollution, channel navigability, reservoir ageing, hydroelectric equipment longevity, fish habitat, river aesthetics and scientific interest. The sediment load transported in a river is a very complex hydrological phenomenon. Hence, sediment transport has attracted the attention of engineers from various perspectives, and different methods have been used for its estimation, with several empirical equations proposed by experts. Although the results of these methods differ considerably from each other and from experimental observations, owing to the limitations of sediment measurement, these equations can still be used to estimate sediment load. In the present study, two black-box models, namely a sediment rating curve (SRC) and an artificial neural network (ANN), are used to simulate the suspended sediment load. The study is carried out for the Seonath sub-basin. The Seonath is the biggest tributary of the Mahanadi River, and it carries a vast amount of sediment. Data for the Jondhra hydrological observation station were collected from India-WRIS (Water Resources Information System) and the IMD (Indian Meteorological Department); they include discharge, sediment concentration and rainfall for 10 years. In this study, sediment load is estimated from the input parameters (discharge, rainfall, and past sediment) in various combinations of simulations. The sediment rating curve uses the water discharge to estimate the sediment concentration, which is then converted to sediment load. Likewise, for the ANN, the data are normalised first and then fed in various combinations to yield the sediment load.
RMSE (root mean square error) and R² (coefficient of determination) between the observed and estimated loads are used as evaluation criteria. For an ideal model, RMSE is zero and R² is 1. However, as the models used in this study are black-box models, they do not exactly represent the factors that cause sedimentation; hence, the model that gives the lowest RMSE and highest R² is taken as the best model in this study. The lowest values of RMSE (based on normalised data) for the sediment rating curve, feed-forward back propagation, cascade-forward back propagation and neural network fitting are 0.043425, 0.00679781, 0.0050089 and 0.0043727, respectively. The corresponding values of R² are 0.8258, 0.9941, 0.9968 and 0.9976. This implies that the neural network fitting model is superior to the other models used in this study. However, a drawback of neural network fitting is that it produces a few negative estimates, which are not tolerable in the estimation of sediment load, and hence this model cannot be declared the best model based on this study. Cascade-forward back propagation produces results very close to those of the neural network fitting model, and hence it is the best model based on the present study.
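
The sediment-rating-curve step and the two evaluation criteria can be sketched in a few lines. This is an illustrative sketch, not the authors' code: it assumes the common power-law form Qs = a·Q^b fitted in log-log space, and the discharge and load values below are synthetic, not the Jondhra station data.

```python
# Sediment rating curve (Qs = a * Q^b, fitted in log-log space) plus the
# RMSE and R^2 criteria used to compare models in the abstract.
import numpy as np

def fit_src(q, qs):
    """Fit log(qs) = log(a) + b*log(q) by least squares; return (a, b)."""
    b, log_a = np.polyfit(np.log(q), np.log(qs), 1)
    return np.exp(log_a), b

def rmse(obs, est):
    return float(np.sqrt(np.mean((obs - est) ** 2)))

def r_squared(obs, est):
    ss_res = np.sum((obs - est) ** 2)
    ss_tot = np.sum((obs - np.mean(obs)) ** 2)
    return float(1 - ss_res / ss_tot)

q = np.array([10.0, 25.0, 60.0, 120.0, 300.0])  # synthetic discharge
qs = 0.5 * q ** 1.4                             # synthetic load, exact power law
a, b = fit_src(q, qs)
est = a * q ** b
print(round(a, 2), round(b, 2))  # recovers a ~ 0.5, b ~ 1.4 on this exact data
```

On real data the fit is not exact, and the residual RMSE and R² are what the abstract compares against the ANN variants.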

Keywords: artificial neural network, root mean square error, sediment, sediment rating curve

Procedia PDF Downloads 325