Search results for: group process model
1243 Repair of Thermoplastic Composites for Structural Applications
Authors: Philippe Castaing, Thomas Jollivet
Abstract:
As a result of their advantages, i.e. recyclability, weldability, and environmental compatibility, long (continuous) fiber thermoplastic composites (LFTPC) are increasingly used in many industrial sectors (mainly automotive and aeronautic) for structural applications. Indeed, in the next ten years, environmental rules will put pressure on the use of new structural materials like composites. In aerospace, more than 50% of the damage is due to stress impact, and 85% of damage is repaired on the fuselage (fuselage skin panels and around doors). With the arrival of airplanes made mainly of composite materials, replacement of sections or panels seems economically difficult, and repair becomes essential. The objective of the present study is to propose a repair solution that avoids replacing the damaged part in thermoplastic composites, in order to recover the initial mechanical properties. The classification of impact damage is not so easy: talking about low-energy impact (less than 35 J) can be totally wrong when high speeds or small thicknesses, as well as thermoplastic resins, are considered. Crash and perforation at higher energy create extensive damage, and those structures are replaced without repair, so we consider here only damage due to low-energy impacts, which for laminates takes the following forms: transverse cracking, delamination, and fiber rupture. At low energy, the damage is barely visible but can nevertheless significantly reduce the mechanical strength of the part through resin cracks, while little fiber rupture is observed. The patch repair solution remains the standard one but may lead to fiber rupture and consequently create more damage. That is why we investigate the repair of thermoplastic composites impacted at low energy. Indeed, thermoplastic resins are interesting as they absorb impact energy through plastic strain.
The methodology is as follows: impact tests at low energy on thermoplastic composites; identification of the damage by micrographic observations; evaluation of the harmfulness of the damage; repair by reconsolidation according to the extent of the damage; and validation of the repair by mechanical characterization (compression). In this study, the impact tests are performed at various levels of energy on thermoplastic composites (PA/C, PEEK/C and PPS/C, woven 50/50 and unidirectional) to determine the level of impact energy that creates damage in the resin without fiber rupture. We identify the extent of the damage by ultrasonic inspection and micrographic observations through the part thickness. The samples were in addition characterized in compression to evaluate the loss of mechanical properties. The repair strategy then consists in reconsolidating the damaged parts by thermoforming; after reconsolidation, the laminates are characterized in compression for validation. To conclude, the study demonstrates the feasibility of repair after low-energy impact on thermoplastic composites, as the samples recover their properties. As a first step of the study, the “repair” is made by reconsolidation on a thermoforming press, but an in situ process to reconsolidate the damaged parts could be imagined.
Keywords: aerospace, automotive, composites, compression, damages, repair, structural applications, thermoplastic
Procedia PDF Downloads 304
1242 Sustainable Harvesting, Conservation and Analysis of Genetic Diversity in Polygonatum Verticillatum Linn.
Authors: Anchal Rana
Abstract:
The Indian Himalayas, with their diverse climatic conditions, are home to many rare and endangered medicinal flora. One such species is Polygonatum verticillatum Linn., popularly known as King Solomon’s Seal or Solomon’s Seal. Its mention as an incredible medicinal herb comes from 5000 years ago in the Indian Materia Medica as a component of Ashtavarga, a poly-herbal formulation comprising eight herbs, described as the world’s first revitalizing and rejuvenating nutraceutical food, now commercialised under the name ‘Chyawanprash’. It is an erect, tall (60 to 120 cm) perennial herb with sessile, linear leaves and white pendulous flowers. The species grows well in an altitude range of 1600 to 3600 m amsl and propagates mostly through rhizomes. The rhizomes are a potential source of significant phytochemicals such as flavonoids, phenolics, lectins, terpenoids, allantoin, diosgenin, β-sitosterol and quinine. The presence of such phytochemicals gives the species antioxidant, cardiotonic, demulcent, diuretic, energizing, emollient, aphrodisiac, appetizing and galactagogue properties. Having high concentrations of macro- and micronutrients, the species also has good prospects of being used as a diet supplement. However, due to unscientific and indiscriminate uprooting, it has been assigned the status of ‘vulnerable’ and ‘endangered’ in the Conservation Assessment and Management Plan (CAMP) process conducted by the Foundation for Revitalisation of Local Health Traditions (FRLHT) during 2010, according to IUCN Red List criteria. Further, destructive harvesting, land use disturbances, heavy livestock grazing, climatic changes and habitat fragmentation have substantially contributed to the decline of the species. It has therefore become imperative to conserve the diversity of the species and make judicious use of it in future research and commercial programmes and schemes.
A Gene Bank was therefore established at the High Altitude Herbal Garden of the Forest Research Institute, Dehradun, India, situated at Chakarata (30°42′52.99″N, 77°51′36.77″E, 2205 m amsl), consisting of 149 accessions collected from thirty-one geographical locations spread over the three Himalayan states of Jammu and Kashmir, Himachal Pradesh, and Uttarakhand. The present investigation covers the sampling and collection of divergent germplasm, followed by planting and cultivation techniques. The ultimate aim is to analyse the genetic diversity of the species and capture promising genotypes for a further genetic improvement programme, so as to contribute towards sustainable development and healthcare.
Keywords: Polygonatum verticillatum Linn., phytochemicals, genetic diversity, conservation, gene bank
Procedia PDF Downloads 171
1241 Assessment of the Environmental Compliance at the Jurassic Production Facilities towards HSE MS Procedures and Kuwait Environment Public Authority Regulations
Authors: Fatemah Al-Baroud, Sudharani Shreenivas Kshatriya
Abstract:
Kuwait Oil Company (KOC) is one of the oil and gas production companies in Kuwait. The oil and gas industry is truly global, with operations conducted in every corner of the globe, and the global community will continue to rely heavily on oil and gas supplies. KOC has made many commitments to protect the environment from the impacts of its operations and operational releases. As per KOC’s strategy, the substantial increase in production activities will bring many challenges in managing the various environmental hazards and stresses in the company. In order to handle those environmental challenges, effective implementation of the health, safety, and environmental management system (HSEMS) is essential. By implementing the HSEMS properly, the environmental aspects of activities, products, and services are identified, evaluated, and controlled in order to (i) comply with local regulatory and other obligatory requirements; (ii) comply with company policy and business requirements; and (iii) reduce adverse environmental impact, including adverse impact on the company’s reputation. Assessments of the Jurassic Production Facilities (JPFs) are being carried out as part of the KOC HSEMS procedural requirements, monitoring the implementation of the relevant HSEMS procedures at the facilities. The assessments have been done by conducting a series of theme audits using KOC’s audit protocol at the JPFs. The objectives of the audits are to evaluate the compliance of the facilities with the environmental procedures and the status of the KEPA requirements at all JPFs. The facilities covered during the theme audit program are: (1) Jurassic Production Facility (JPF) – Sabriya; (2) Jurassic Production Facility (JPF) – East Raudhatian; (3) Jurassic Production Facility (JPF) – West Raudhatian; (4) Early Production Facility (EPF 50).
The auditing process focuses comprehensively on the application of KOC HSE MS procedures at the JPFs and their ability to reduce the resultant negative impacts of operations on the environment. A number of findings and observations were noted, highlighted in the audit reports, and sent to all concerned controlling teams. The results of these audits indicated that the facilities, in general, were in line with KOC HSE procedures, and there was commitment to documenting all HSE issues in the right records and plans. Further, several control measures implemented at the JPFs minimized or reduced the environmental impact; for example, sulphur recovery units (SRUs) were installed. A follow-up monitoring audit after a sufficient period of time will be carried out in conjunction with the controlling teams in order to verify the current status of the recommendations and evaluate the contractors’ performance in preserving the environment.
Keywords: assessment of environmental compliance, environmental and social impact assessment, Kuwait Environment Public Authority regulations, health, safety and environment management procedures, Jurassic production facilities
Procedia PDF Downloads 185
1240 Effect of Malnutrition at Admission on Length of Hospital Stay among Adult Surgical Patients in Wolaita Sodo University Comprehensive Specialized Hospital, South Ethiopia: Prospective Cohort Study, 2022
Authors: Yoseph Halala Handiso, Zewdi Gebregziabher
Abstract:
Background: Malnutrition in hospitalized patients remains a major public health problem in both developed and developing countries. Despite the fact that malnourished patients are more prone to longer hospital stays, there is limited data on the magnitude of malnutrition and its effect on length of stay among surgical patients in Ethiopia, while nutritional assessment is also often a neglected component of health service practice. Objective: This study aimed to assess the prevalence of malnutrition at admission and its effect on the length of hospital stay among adult surgical patients in Wolaita Sodo University Comprehensive Specialized Hospital, South Ethiopia, 2022. Methods: A facility-based prospective cohort study was conducted among 398 adult surgical patients admitted to the hospital. Participants were chosen using a convenience sampling technique. The Subjective Global Assessment (SGA) was used to determine the nutritional status of patients with a minimum stay of 24 hours, within 48 hours of admission. Data were collected using Open Data Kit (ODK) version 2022.3.3 software, while Stata version 14.1 was employed for statistical analysis. A Cox regression model was used to determine the effect of malnutrition on the length of hospital stay (LOS) after adjusting for several potential confounders measured at admission. The adjusted hazard ratio (AHR) with a 95% confidence interval was used to show the effect of malnutrition. Results: The prevalence of hospital malnutrition at admission was 64.32% (95% CI: 59%-69%) according to the SGA classification. Adult surgical patients who were malnourished at admission had a higher median LOS (12 days; 95% CI: 11-13) compared to well-nourished patients (8 days; 95% CI: 8-9), meaning that malnourished patients were at higher risk of a reduced chance of discharge with improvement (prolonged LOS) (AHR: 0.37, 95% CI: 0.29-0.47) compared to well-nourished patients.
Presence of comorbidity (AHR: 0.68, 95% CI: 0.50-0.90), polymedication (AHR: 0.69, 95% CI: 0.55-0.86), and history of admission within the previous five years (AHR: 0.70, 95% CI: 0.55-0.87) were found to be significant covariates of the length of hospital stay (LOS). Conclusion: The magnitude of hospital malnutrition at admission was found to be high. Patients malnourished at admission had a higher risk of prolonged length of hospital stay compared to well-nourished patients. The presence of comorbidity, polymedication, and history of admission were significant covariates of LOS. All stakeholders should give attention to reducing the magnitude of malnutrition and its covariates in order to reduce the burden of prolonged LOS.
Keywords: effect of malnutrition, length of hospital stay, surgical patients, Ethiopia
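The reported adjusted hazard ratio lends itself to a quick numerical reading: under a proportional-hazards model the "event" is discharge with improvement, so an AHR of 0.37 means malnourished patients are discharged at roughly a third of the rate of well-nourished patients. A minimal simulation sketch (assuming exponentially distributed discharge times and an 8-day baseline median, simplifications not made in the study itself):

```python
import random
import statistics

random.seed(42)

BASE_HAZARD = 0.0866   # well-nourished discharge rate per day (ln 2 / 8 -> 8-day median)
AHR = 0.37             # adjusted hazard ratio for malnourished patients (from the study)
N = 100_000

# Exponential time-to-discharge: a lower hazard means a longer stay.
well = [random.expovariate(BASE_HAZARD) for _ in range(N)]
mal = [random.expovariate(BASE_HAZARD * AHR) for _ in range(N)]

median_well = statistics.median(well)
median_mal = statistics.median(mal)

# Under proportional hazards with exponential times, the median LOS
# ratio approaches 1 / AHR (about 2.7 here).
print(round(median_well, 1), round(median_mal, 1))
```

With exponential times the median LOS scales as 1/AHR; the observed 12-day versus 8-day medians suggest the real discharge-time distribution is less dispersed than this toy assumption.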
Procedia PDF Downloads 65
1239 Festival Gamification: Conceptualization and Scale Development
Authors: Liu Chyong-Ru, Wang Yao-Chin, Huang Wen-Shiung, Tang Wan-Ching
Abstract:
Although gamification has received attention and been applied in the tourism industry, limited literature can be found in the tourism academy. Therefore, to contribute knowledge on festival gamification, it is essential to start by establishing a Festival Gamification Scale (FGS). This study defines festival gamification as the extent to which a festival involves game elements and game mechanisms. Based on self-determination theory, this study developed an FGS. Through a multi-study method, in study one, five FGS dimensions were identified through a literature review, followed by twelve in-depth interviews. A total of 296 statements were extracted from the interviews and later narrowed down to 33 items under six dimensions. In study two, 226 survey responses were collected from a cycling festival for exploratory factor analysis, resulting in twenty items under five dimensions. In study three, 253 survey responses were obtained from a marathon festival for confirmatory factor analysis, resulting in the final sixteen items under five dimensions. Results of criterion-related validity then confirmed the positive effects of these five dimensions on flow experience. In study four, to examine the model extension of the developed five-dimensional, 16-item FGS, which includes the dimensions of relatedness, mastery, competence, fun, and narratives, cross-validation analysis was performed using 219 survey responses from a religious festival. For the tourism academy, the FGS could further be applied in other sub-fields such as destinations, theme parks, cruise trips, or resorts. The FGS serves as a starting point for examining the mechanism of festival gamification in changing tourists’ attitudes and behaviors. Future studies could follow up on the FGS by testing outcomes of festival gamification or examining moderating effects that enhance those outcomes.
On the other hand, although the FGS has been tested in cycling, marathon, and religious festivals, the research settings are all in Taiwan. Cultural differences in the FGS are another direction for contributing knowledge on festival gamification. This study also offers several valuable practical implications. First, the FGS could be utilized in tourist surveys to evaluate the extent of gamification of a festival. Based on the results of a performance assessment by the FGS, festival management organizations and festival planners could learn the relative scores among the FGS dimensions and plan for future improvement in gamifying the festival. Second, the FGS could be applied in positioning a gamified festival. Festival management organizations and festival planners could first consider the features and type of their festival, and then gamify it by investing resources in key FGS dimensions.
Keywords: festival gamification, festival tourism, scale development, self-determination theory
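Reliability checks routinely accompany this kind of scale development. As an illustration, Cronbach's alpha (a standard internal-consistency statistic; the item data below are invented, not FGS responses) can be computed as follows:

```python
from statistics import pvariance

# Rows = respondents, columns = items of one hypothetical scale dimension (1-5 Likert).
responses = [
    [4, 5, 4, 4],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 3],
    [4, 4, 5, 5],
    [3, 2, 3, 3],
]

k = len(responses[0])                 # number of items
items = list(zip(*responses))         # column view of the response matrix
item_vars = [pvariance(col) for col in items]
total_scores = [sum(row) for row in responses]

# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total scores)
alpha = (k / (k - 1)) * (1 - sum(item_vars) / pvariance(total_scores))
print(round(alpha, 3))
```

Values above roughly 0.7 are conventionally read as acceptable internal consistency for a dimension's items.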
Procedia PDF Downloads 146
1238 Evaluating Value of Users' Personal Information Based on Cost-Benefit Analysis
Authors: Jae Hyun Park, Sangmi Chai, Minkyun Kim
Abstract:
As users spend more time on the Internet, the probability of their personal information being exposed has been growing. The main purpose of this research is to investigate the factors, and examine the relationships among them, that shape how Internet users recognize the value of their private information as an economic asset. The study targets Internet users, and the value of their private information is converted into economic figures. Moreover, how this economic value changes in relation to individual attributes, the dealer’s traits, and circumstantial properties is studied. In this research, the changes in factors affecting private information value in different situations are analyzed from an economic perspective. Additionally, this study examines the associations between users’ perceived risk and the value of their personal information. Using the cost-benefit analysis framework, the hypothesis that the user’s sense of private information value can be influenced by individual attributes and situational properties is tested. This research therefore attempts to address three research objectives. First, it identifies factors that affect users’ recognition of the value of their personal information. Second, it provides evidence that information system users’ economic valuation of information differs according to personal, trade-opponent, and situational attributes. Third, it investigates the impact of those attributes on individuals’ perceived risk. Based on the assumption that personal, trade-opponent and situational attributes have an impact on users’ recognition of private information value, this research presents an understanding of the different impacts of those attributes on the economic value of information and tests the associative relationships between perceived risk and decisions on the value of users’ personal information.
In order to validate the research model, regression analysis was used. The results support that information breach experience and information security systems are associated with users’ perceived risk. Information control and uncertainty are also related to users’ perceived risk. Therefore, users’ perceived risk is a significant factor in evaluating the value of personal information, and it can be differentiated by trade-opponent and situational attributes. This research presents a new perspective on evaluating the value of users’ personal information in the context of perceived risk and personal, trade-opponent and situational attributes. It fills a gap in the literature by showing how users’ perceived risk is associated with personal, trade-opponent and situational attributes when personal information is provided in business transactions. It adds to previous literature the finding that a relationship exists between perceived risk and the value of users’ private information in the economic perspective. It also provides meaningful insights for managers: in order to minimize the cost of information breaches, managers need to recognize the value of individuals’ personal information and decide on the proper amount of investment in protecting users’ online information privacy.
Keywords: private information, value, users, perceived risk, online information privacy, attributes
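As a sketch of the regression step, a single-predictor ordinary least squares fit is shown below; the variable names and data are hypothetical placeholders, not the study's dataset:

```python
# Simple OLS of perceived risk on one attribute (synthetic, illustrative data only).
breach_experience = [0, 1, 0, 2, 1, 3, 2, 4]                   # hypothetical breach counts
perceived_risk    = [2.1, 3.0, 2.4, 3.8, 3.2, 4.6, 4.1, 5.2]   # hypothetical 1-7 scale

n = len(breach_experience)
mean_x = sum(breach_experience) / n
mean_y = sum(perceived_risk) / n

# slope = cov(x, y) / var(x); intercept recovered from the means
sxy = sum((x - mean_x) * (y - mean_y)
          for x, y in zip(breach_experience, perceived_risk))
sxx = sum((x - mean_x) ** 2 for x in breach_experience)
slope = sxy / sxx
intercept = mean_y - slope * mean_x
print(round(slope, 2), round(intercept, 2))
```

A positive slope here would mirror the reported finding that breach experience is associated with higher perceived risk; the full study of course fits multiple predictors at once.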
Procedia PDF Downloads 239
1237 Comparison of Microstructure, Mechanical Properties and Residual Stresses in Laser and Electron Beam Welded Ti–5Al–2.5Sn Titanium Alloy
Authors: M. N. Baig, F. N. Khan, M. Junaid
Abstract:
Titanium alloys are widely employed in aerospace, medical, chemical, and marine applications. These alloys offer many advantages such as low specific weight, high strength-to-weight ratio, excellent corrosion resistance, high melting point and good fatigue behavior. These attractive properties make titanium alloys very unique, and they therefore require special attention in all areas of processing, especially welding. In this work, 1.6 mm thick sheets of Ti-5Al-2.5Sn, an alpha titanium (α-Ti) alloy, were welded using electron beam welding (EBW) and laser beam welding (LBW) processes to achieve a full-penetration bead-on-plate (BoP) configuration. The weldments were studied using polarized optical microscopy, SEM, EDS and XRD. The microhardness distribution across the weld zone and the smooth and notch tensile strengths of the weldments were also recorded. Residual stresses, using the hole-drill strain measurement (HDSM) method, and deformation patterns of the weldments were measured for the purpose of comparing the two welding processes. Fusion zone widths of both EBW and LBW weldments were found to be approximately equivalent, owing to the fairly similar high power densities of the two processes. Relatively lower oxide content, and consequently higher joint quality, was achieved in the EBW weldment compared to LBW due to the vacuum environment and the absence of any shielding gas. However, an increase in heat-affected zone width and a partial α′-martensitic transformation in the fusion zone of the EBW weldment were observed because of the lower cooling rates associated with EBW as compared with LBW. The microstructure in the fusion zone of the EBW weldment comprised both acicular α and α′ martensite within the prior β grains, whereas complete α′-martensitic transformation was observed within the fusion zone of the LBW weldment. Hardness of the fusion zone in the EBW weldment was found to be lower than that of the LBW weldment due to the observed microstructural differences.
The notch tensile specimen of LBW exhibited higher load capacity, ductility, and absorbed energy compared with the EBW specimen due to the presence of the high-strength α′-martensitic phase. It was observed that the sheet deformation and deformation angle in the EBW weldment were greater than in the LBW weldment due to relatively greater heat retention in EBW, which led to more thermal strain and hence higher deformation and deformation angle. The lowest residual stresses were found in the LBW weldments and were tensile in nature, owing to the high power density and higher cooling rates associated with the LBW process. The EBW weldment exhibited the highest compressive residual stresses, which are expected to improve its service life.
Keywords: laser and electron beam welding, microstructure and mechanical properties, residual stress and distortions, titanium alloys
Procedia PDF Downloads 226
1236 Effect of the Orifice Plate Specifications on Coefficient of Discharge
Authors: Abulbasit G. Abdulsayid, Zinab F. Abdulla, Asma A. Omer
Abstract:
On the grounds that the orifice plate is relatively inexpensive, requires very little maintenance and is only calibrated during plant turnarounds, it has come into truly prevalent use in the gas industry. Measurement inaccuracy in fiscal metering stations may well be the most important factor behind mischarges in the natural gas industry in Libya. A very trivial error in measurement can add up to a rapidly escalating financial burden in custody-transfer transactions. The unaccounted gas quantity transferred annually via orifice plates in Libya could be estimated at several million dollars. As oil and gas wealth is the sole source of income for Libya, every effort is now being exerted to improve the accuracy of existing orifice metering facilities. The discharge coefficient has become pivotal in current research undertaken in this regard. Hence, increasing knowledge of the flow field in a typical orifice meter is indispensable. Recently, and at a drastic pace, CFD has become the most time- and cost-efficient, versatile tool for in-depth analysis of the fluid mechanics and heat and mass transfer of various industrial applications. Getting deeper into the underlying physical phenomena, and predicting all relevant parameters and variables with high spatial and temporal resolution, have been the greatest advantages counting for CFD. In this paper, flow phenomena for air passing through an orifice meter were numerically analyzed with CFD-code-based modeling, giving important information about the effect of orifice plate specifications on the discharge coefficient for three different tapping locations, i.e., flange tappings and D and D/2 tappings, compared with vena contracta tappings. The discharge coefficients were compared with those estimated by ISO 5167.
The influences of orifice plate bore thickness, orifice plate thickness, bevel angle, perpendicularity and buckling of the orifice plate were all duly investigated. An orifice meter with a pipe diameter of 2 in, a beta ratio of 0.5 and a Reynolds number of 91100 was taken as a model. The results highlighted that the discharge coefficients were highly responsive to variations in the plate specifications, and in all cases the discharge coefficients for D and D/2 tappings were very close to those of vena contracta tappings, which are considered an ideal arrangement. Also, in a general sense, it was found that the standard equation in ISO 5167, by which the discharge coefficient is calculated, cannot capture the variation of the plate specifications, and thus further thorough consideration is still needed.
Keywords: CFD, discharge coefficients, orifice meter, orifice plate specifications
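For context on the model case (beta ratio 0.5, Reynolds number 91100), the ISO 5167 discharge coefficient can be approximated with a truncated form of the Reader-Harris/Gallagher correlation. The sketch below deliberately omits the tapping-location correction terms, so it is only an order-of-magnitude check, not a substitute for the full equation in the standard:

```python
def cd_truncated_rhg(beta: float, re_d: float) -> float:
    """Truncated Reader-Harris/Gallagher correlation (leading terms only;
    tapping-location corrections from ISO 5167-2 are omitted for brevity)."""
    a = (19000.0 * beta / re_d) ** 0.8
    return (
        0.5961
        + 0.0261 * beta**2
        - 0.216 * beta**8
        + 0.000521 * (1e6 * beta / re_d) ** 0.7
        + (0.0188 + 0.0063 * a) * beta**3.5 * (1e6 / re_d) ** 0.3
    )

# Model case from the study: beta = 0.5, Re_D = 91100
cd = cd_truncated_rhg(beta=0.5, re_d=91100.0)
print(round(cd, 4))
```

The result lands near the familiar 0.60-0.61 range for a beta-0.5 plate, which is the kind of baseline against which the CFD-derived coefficients above would be compared.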
Procedia PDF Downloads 119
1235 From Primer Generation to Chromosome Identification: A Primer Generation Genotyping Method for Bacterial Identification and Typing
Authors: Wisam H. Benamer, Ehab A. Elfallah, Mohamed A. Elshaari, Farag A. Elshaari
Abstract:
A challenge for laboratories is to provide bacterial identification and antibiotic sensitivity results within a short time. Hence, advancement in the required technology is desirable to improve timing, accuracy and quality. Even with the current advances in methods used for both phenotypic and genotypic identification of bacteria, there remains a need to develop method(s) that enhance the accuracy and turnaround time of bacteriology laboratories. The hypothesis introduced here is based on the assumption that the chromosome of any bacterium contains unique sequences that can be used for its identification and typing. The outcome of a pilot study designed to test this hypothesis is reported in this manuscript. Methods: The complete chromosome sequences of several bacterial species were downloaded for use as search targets for unique sequences. Visual Basic and SQL Server (2014) were used to generate a complete set of 18-base-long primers, a process that started with the reverse translation of six randomly chosen amino acids to limit the number of generated primers. In addition, software was designed to scan the downloaded chromosomes for similarities using the generated primers, and the resulting hits were classified according to the number of similar chromosomal sequences, i.e., unique or otherwise. Results: All primers that had identical/similar sequences in the selected genome sequence(s) were classified according to the number of hits in the chromosome search. Those that were identical to a single site on a single bacterial chromosome were referred to as unique. On the other hand, most generated primer sequences were identical to multiple sites on a single chromosome or on multiple chromosomes. Following scanning, the generated primers were classified based on their ability to differentiate between medically important bacteria, and the initial results look promising.
Conclusion: A simple strategy that starts by generating primers was introduced; the primers were used to screen bacterial genomes for matches. Primer(s) that were uniquely identical to a specific DNA sequence on a specific bacterial chromosome were selected. The identified unique sequence can be used in different molecular diagnostic techniques, possibly to identify bacteria. In addition, a single primer that identifies multiple sites in a single chromosome can be exploited for region or genome identification. Although draft genome sequences of isolates enable high-throughput primer design using alignment strategies, which enhances diagnostic performance compared with traditional molecular assays, in this method the generated primers can be used to identify an organism before the draft sequence is completed. In addition, the generated primers can be used to build a bank of easily accessible primers for identifying bacteria.
Keywords: bacteria chromosome, bacterial identification, sequence, primer generation
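The generate-scan-classify loop described above can be sketched in a few lines of Python. The codon table, peptide, and toy "chromosome" below are illustrative stand-ins (the study itself used Visual Basic and SQL Server against complete downloaded chromosomes):

```python
from itertools import product

# Codon table restricted to the residues of the toy peptide (standard genetic code;
# the study reverse-translated 6 randomly chosen amino acids into 18-base primers).
CODONS = {
    "M": ["ATG"],
    "W": ["TGG"],
    "K": ["AAA", "AAG"],
    "N": ["AAT", "AAC"],
    "D": ["GAT", "GAC"],
    "E": ["GAA", "GAG"],
}

def reverse_translate(peptide: str) -> list[str]:
    """All 18-base primers encoding a 6-residue peptide."""
    choices = [CODONS[aa] for aa in peptide]
    return ["".join(combo) for combo in product(*choices)]

def unique_primers(primers: list[str], chromosome: str) -> list[str]:
    """Primers hitting exactly one site in the chromosome (the 'unique' class)."""
    return [p for p in primers if chromosome.count(p) == 1]

# Toy chromosome: one single-hit encoding and one encoding present twice.
chromosome = "GG" + "ATGTGGAAAAATGATGAA" + "CCCC" + "ATGTGGAAGAATGATGAA" * 2
primers = reverse_translate("MWKNDE")
print(len(primers), unique_primers(primers, chromosome))
```

Of the 16 candidate primers, only the single-hit one survives the uniqueness filter; a multi-hit primer would fall into the region/genome-identification class discussed above.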
Procedia PDF Downloads 193
1234 The Importance of Efficient and Sustainable Water Resources Management and the Role of Artificial Intelligence in Preventing Forced Migration
Authors: Fateme Aysin Anka, Farzad Kiani
Abstract:
Forced migration is a situation in which people are forced to leave their homes against their will due to political conflicts and wars, natural disasters, climate change, economic crises, or other emergencies. This type of migration takes place under conditions in which people cannot lead a sustainable life for reasons such as security, shelter and meeting their basic needs, and it may occur in connection with different factors affecting people's living conditions. In addition to these general and widespread causes, water security and water resources constitute one that is emerging now and will be encountered more and more in the future. Forced migration may occur due to insufficient or depleted water resources in the areas where people live. In this case, people's living conditions become unsustainable, and they may have to go elsewhere, as they cannot obtain basic needs such as drinking water and the water used for agriculture and industry. To cope with these situations, it is important to minimize the causes, as international organizations and societies must provide assistance (for example, humanitarian aid, shelter, medical support and education) and protection to address (or mitigate) this problem. From the international perspective, plans such as the Green New Deal (GND) and the European Green Deal (EGD) draw attention to the need for people to live equally in a cleaner and greener world. Especially recently, with the advancement of technology, science and methods have become more efficient. In this regard, this article presents a multidisciplinary case model that addresses the water problem with an engineering approach within the framework of its social dimension. It is worth emphasizing that this problem is largely linked to climate change and the lack of a sustainable water management perspective.
As a matter of fact, the United Nations Development Agency (UNDA) draws attention to this problem in its universally accepted sustainable development goals. Therefore, an artificial intelligence-based approach has been applied, focusing on the water management problem. The most general but also most important aspect of water resources management is correct consumption. In this context, the artificial intelligence-based system undertakes tasks such as water demand forecasting and distribution management, emergency and crisis management, water pollution detection and prevention, and maintenance and repair control and forecasting.
Keywords: water resource management, forced migration, multidisciplinary studies, artificial intelligence
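Of the tasks listed, demand forecasting is the easiest to illustrate. The sketch below fits a least-squares linear trend to invented daily consumption figures and projects one day ahead; a deployed system would use richer models (seasonality, weather, telemetry), so this is only a minimal illustration:

```python
# Synthetic daily water demand (thousand m^3) with a mild upward trend.
demand = [102, 104, 103, 107, 108, 110, 109, 113, 114, 116]

n = len(demand)
days = list(range(n))
mean_t = sum(days) / n
mean_d = sum(demand) / n

# Least-squares trend line; next-day forecast = intercept + slope * next_day
slope = sum((t - mean_t) * (d - mean_d) for t, d in zip(days, demand)) / sum(
    (t - mean_t) ** 2 for t in days
)
intercept = mean_d - slope * mean_t
forecast_next = intercept + slope * n
print(round(slope, 2), round(forecast_next, 1))
```

The fitted slope quantifies the daily growth in demand, and the one-step forecast is the kind of signal a distribution-management system would act on.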
Procedia PDF Downloads 86
1233 Phenomenology of Child Labour in Estates, Farms and Plantations in Zimbabwe: A Comparative Analysis of Tanganda and Eastern Highlands Tea Estates
Authors: Chupicai Manuel
Abstract:
Global efforts to end child labour have been increasingly challenged by the workings of global capitalism, inequality and poverty affecting the global south. In the face of rising inequalities, whose origins can be explained through a historical and political-economy analysis of relations between poor and rich countries, child labour is also on the rise, particularly in the global south. The socio-economic and political context of Zimbabwe has undergone serious transition, from colonial times through the post-independence period (normally referred to as the transition period) up to the present day. These transitions have aided companies and entities in the business and agriculture sectors in exploiting child labour, while the country provided conditions that enhanced child labour owing to the vulnerability of children and the anomic child welfare system that plagued it. Children from marginalised communities dominated by plantations and farms are affected most. This paper explores the experiences and perceptions of children working in tea estates, plantations and farms, and of adults who worked on these plantations during their childhood, to share their experiences and perceptions of child labour in Zimbabwe. Childhood theories that view children as apprentices, and a human rights perspective, were employed to interrogate the concepts of childhood and child labour and poverty alleviation strategies. A phenomenological research design was adopted to describe the experiences of children working on plantations and interpret the meanings they attach to their work and livelihoods. The paper drew on 30 children from two plantations through semi-structured interviews, and on 15 key informant interviews with civil society organisations, the International Labour Organization, adults who formerly worked on the plantations, and plantation personnel.
The findings of the study revealed that children work on the farms as an alternative survival model against economic challenges, while the majority cited that poverty compels them to work so that their fees and food are paid for. Civil society organisations were of the view that child rights are violated and that the country's welfare system is dysfunctional. The majority of the children interviewed perceive the system on the plantations as better, which confirmed the socio-constructivist theory that views children as apprentices. The study recommended child-sensitive policies and a welfare regime that protects children from exploitation, together with policing and legal measures that secure child rights.
Keywords: child labour, child rights, phenomenology, poverty reduction
Procedia PDF Downloads 256
1232 The Impact of the Macro-Level: Organizational Communication in Undergraduate Medical Education
Authors: Julie M. Novak, Simone K. Brennan, Lacey Brim
Abstract:
Undergraduate medical education (UME) curricula notably address micro-level communication (e.g., patient-provider, intercultural, inter-professional), yet frequently under-examine the role and impact of organizational communication, a more macro-level concern. Organizational communication, however, functions as a foundation and operates through the systemic structures of an organization; it thereby serves as a hidden curriculum and influences learning experiences and outcomes. Yet little available research fully examines how students experience organizational communication while in medical school, and extant literature and best practices provide insufficient guidance for UME programs in particular. The purpose of this study was to map and examine the current organizational communication systems and processes in a UME program. Employing a phenomenology-grounded and participatory approach, this study sought to understand the organizational communication system from medical students' perspective. The research team consisted of a core team and 13 medical student co-investigators. The research employed multiple methods, including focus groups, individual interviews, and two surveys (one reflective of the focus group questions, the other requesting students to submit 'examples' of communications). To provide context for student responses, non-student participants (faculty, administrators, and staff) were sampled, as they too express concerns about communication. Over 400 students across all cohorts and 17 non-students participated. Data were iteratively analyzed and checked for triangulation. Findings reveal the complex nature of organizational communication and student-oriented communications. They reveal strengths, weaknesses, gaps, and tensions that affect the program, and they speak to the role of organizational communication practices in influencing both climate and culture.
With regard to communications, students receive multiple, simultaneous communications from multiple sources/channels, both formal (e.g., official email) and informal (e.g., social media). Students identified organizational strengths, including the desire to improve student voice and message frequency. They also identified weaknesses related to over-reliance on email, numerous platforms with inconsistent utilization, incorrect information, insufficient transparency, assessment/input fatigue, tacit expectations, scheduling/deadlines, responsiveness, and mental health confidentiality concerns. Moreover, they noted gaps related to lack of coordination/organization, ambiguous point-persons, student 'voice-only' input, open communication loops, lack of core centralization and consistency, and mental health bridges. Findings also revealed organizational identity and cultural characteristics as impactful on the medical school experience. Cultural characteristics included program size, diversity, urban setting, student organizations, community engagement, crisis framing, learning for exams, inefficient bureaucracy, and professionalism. Moreover, students identified system structures that do not always leverage cultural strengths or reduce cultural problematics. Based on the results, opportunities for productive change are identified. These include leadership visibly supporting and enacting overall organizational narratives, making greater efforts to consistently 'close the loop', regularly sharing how student input effects change, employing crisis communication strategies more often, strengthening communication infrastructure, ensuring structures facilitate effective operations and change efforts, and highlighting change efforts in informational communication. Organizational communication and communications are not soft skills or of secondary concern within organizations; rather, they are foundational in nature and serve to educate and inform all stakeholders.
As primary stakeholders, students and their success directly affect the accomplishment of organizational goals. This study demonstrates how inquiry into how students navigate their educational experience extends research-based knowledge and provides actionable knowledge for the improvement of organizational operations in UME.
Keywords: medical education programs, organizational communication, participatory research, qualitative mixed methods
Procedia PDF Downloads 115
1231 Nigerian Media Coverage of the Chibok Girls Kidnap: A Qualitative News Framing Analysis of the Nation Newspaper
Authors: Samuel O. Oduyela
Abstract:
Over the last ten years, many studies have examined media coverage of terrorism across the world. Nevertheless, most of these studies have been inclined towards the western narrative, especially in relation to the international media. This study departs from that partiality to explore the Nigerian press and its coverage of Boko Haram, illustrating how the Nigerian press has reported homegrown terrorism within its borders. On 14 April 2014, the Shekau-led Boko Haram kidnapped over 200 female students from Chibok in Borno State. This study analyses a structured sample of news stories, feature articles, editorial comments, and opinion pieces from the Nation newspaper. It examines the representation of the Chibok girls kidnap by concentrating on four main viewpoints: the news framing of the kidnap under Presidents Goodluck Jonathan (2014) and Muhammadu Buhari (2016-2018), the sourcing model present in the news reporting of the kidnap, and the challenges Nation reporters face in reporting Boko Haram. The study adopted qualitative news framing analysis to provide further insights into significant developments established from the examination of news content. The study found that the news reportage mainly focused on the government response to the Chibok girls kidnap, the international press, and Boko Haram. Boko Haram was also framed as a political conspiracy, as prevailing, and as instilling fear. Political and economic influence appeared to be a significant determinant of the reportage. The study found that the Nation newspaper's portrayal of the crisis under President Jonathan differed significantly from that under President Buhari: while the newspaper framed the actions of President Jonathan as lacklustre, dismissive, and confusing, it was less critical of President Buhari's government's handling of the crisis. The Nation newspaper failed to promote or explore non-violent approaches.
News reports of the kidnap were thus presented mainly from a political and ethno-religious perspective. The study also raised the question of what role journalists should play in covering conflicts: should they merely report, comment on and interpret them, or should they be actors in the resolution or, more importantly, the prevention of conflicts? The study underlined the need for the independence of the media and for more training for journalists, to advance a more nuanced and conflict-sensitive news coverage in the Nigerian context.
Keywords: Boko Haram, Chibok girls kidnap, conflict in Nigeria, media framing
Procedia PDF Downloads 148
1230 TARF: Web Toolkit for Annotating RNA-Related Genomic Features
Abstract:
Genomic features, i.e., genome-based coordinates, are commonly used for the representation of biological features such as genes, RNA transcripts and transcription factor binding sites. For the analysis of RNA-related genomic features, such as RNA modification sites, a common task is to correlate these features with transcript components (5'UTR, CDS, 3'UTR) to explore their distribution in terms of transcriptomic coordinates, e.g., to examine whether a specific type of biological feature is enriched near transcription start sites. Existing approaches for performing these tasks involve the manipulation of a gene database, conversion from genome-based to transcript-based coordinates, and visualization methods capable of showing RNA transcript components and the distribution of the features. These steps are complicated and time-consuming, especially for researchers who are not familiar with the relevant tools. To overcome this obstacle, we developed TARF, a dedicated web Toolkit for Annotating RNA-related genomic Features. The TARF web tool is intended to provide a web-based way to easily annotate and visualize RNA-related genomic features. Once a user has uploaded features in BED format and specified a built-in transcript database, or uploaded a customized gene database in GTF format, the tool fulfils three main functions. First, it adds annotation on gene and RNA transcript components. For every feature provided by the user, overlaps with RNA transcript components are identified, and the information is combined in one table that is available for copying and download. Summary statistics on ambiguous assignments are also provided. Second, the tool provides a convenient visualization of the features at the single gene/transcript level.
For a selected gene, the tool shows the features together with the gene model in a genome-based view, and it also maps the features to transcript-based coordinates and shows their distribution along a single spliced RNA transcript. Third, a global transcriptomic view of the genomic features is generated using the Guitar R/Bioconductor package: the distribution of features on RNA transcripts is normalized with respect to RNA transcript landmarks, and the enrichment of the features on different RNA transcript components is demonstrated. We tested the newly developed TARF toolkit with three different types of genomic features, related to chromatin H3K4me3, RNA N6-methyladenosine (m6A) and RNA 5-methylcytosine (m5C), obtained from ChIP-Seq, MeRIP-Seq and RNA BS-Seq data, respectively. TARF successfully revealed their respective distribution characteristics, i.e., H3K4me3, m6A and m5C are enriched near transcription start sites, stop codons and 5'UTRs, respectively. Overall, TARF is a useful web toolkit for the annotation and visualization of RNA-related genomic features and should help simplify the analysis of various RNA-related genomic features, especially those related to RNA modifications.
Keywords: RNA-related genomic features, annotation, visualization, web server
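The genome-to-transcript coordinate conversion that the toolkit automates can be illustrated with a minimal sketch. This is a hypothetical helper, not TARF's actual implementation: given a transcript's exons on the forward strand, a genomic position is mapped to a transcript coordinate by accumulating the lengths of the preceding exons.

```python
def genome_to_transcript(pos, exons):
    """Map a 0-based genomic position to a 0-based transcript coordinate.

    `exons` is a list of (start, end) half-open intervals on the forward
    strand, sorted by genomic position. Returns None if `pos` falls in an
    intron or outside the transcript.
    """
    offset = 0  # transcript bases accumulated from preceding exons
    for start, end in exons:
        if start <= pos < end:
            return offset + (pos - start)
        offset += end - start
    return None

# Example: two 100-bp exons; a modification site at genomic position 350
# lands 50 bp into the second exon, i.e. transcript coordinate 150.
print(genome_to_transcript(350, [(100, 200), (300, 400)]))  # -> 150
print(genome_to_transcript(250, [(100, 200), (300, 400)]))  # -> None (intronic)
```

Reverse-strand transcripts would additionally need the coordinate mirrored against the transcript length, which is omitted here for brevity.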
Procedia PDF Downloads 208
1229 Evaluation of the Photo Neutron Contamination inside and outside of Treatment Room for High Energy Elekta Synergy® Linear Accelerator
Authors: Sharib Ahmed, Mansoor Rafi, Kamran Ali Awan, Faraz Khaskhali, Amir Maqbool, Altaf Hashmi
Abstract:
Medical linear accelerators (LINACs) used in radiotherapy treatments produce undesired neutrons when operated at energies above 8 MeV, in both electron and photon configurations. Neutrons are produced by high-energy photons and electrons through electronuclear (e, n) and photonuclear giant dipole resonance (GDR) reactions. These reactions occur when incoming photons or electrons are incident on the various materials of the target, flattening filter, collimators, and other shielding components in the LINAC structure. These neutrons may reach the patient directly, or they may interact with the surrounding materials until they become thermalized. A study was set up to investigate the effect of different parameters on the production of neutrons around the room by photonuclear reactions induced by photons above ~8 MeV. A commercially available neutron detector (Ludlum Model 42-31H) was used for the detection of thermal and fast neutrons (0.025 eV to approximately 12 MeV) inside and outside of the treatment room. Measurements were performed for different field sizes at 100 cm source-to-surface distance (SSD) of the detector, at different distances from the isocenter, and at the primary and secondary walls. Other measurements were performed at the door and at the treatment console to address the radiation safety concerns of the therapists, who must walk in and out of the room for the treatments. Exposures were delivered by Elekta Synergy® linear accelerators at two different energies (10 MV and 18 MV) for 200 MU at a dose rate of 600 MU per minute.
Results indicate that neutron doses at 100 cm SSD depend on accelerator characteristics, namely the jaw settings: the jaws are made of high-atomic-number material and therefore provide significant photon interactions that produce neutrons. Doses at larger distances from the isocenter are strongly influenced by the treatment room geometry, and backscattering from the walls causes greater doses compared with the dose at 100 cm from the isocenter. In the treatment room, the ambient dose equivalent due to photons produced during the decay of activation nuclei varies from 4.22 mSv.h−1 to 13.2 mSv.h−1 (at the isocenter), 6.21 mSv.h−1 to 29.2 mSv.h−1 (primary wall) and 8.73 mSv.h−1 to 37.2 mSv.h−1 (secondary wall) for 10 and 18 MV, respectively. The ambient dose equivalent for neutrons at the door is 5 μSv.h−1 to 2 μSv.h−1, while at the treatment console it is 2 μSv.h−1 to 0 μSv.h−1, for 10 and 18 MV respectively, which shows that a 2 m thick and 5 m long concrete maze provides sufficient shielding against neutrons at the door as well as at the treatment console for 10 and 18 MV photons.
Keywords: equivalent doses, neutron contamination, neutron detector, photon energy
Procedia PDF Downloads 449
1228 Parallel Fuzzy Rough Support Vector Machine for Data Classification in Cloud Environment
Authors: Arindam Chaudhuri
Abstract:
Classification of data has been actively used as one of the most effective and efficient means of conveying knowledge and information to users. The primary focus has always been on techniques for extracting useful knowledge from data such that returns are maximized. With the emergence of huge datasets, existing classification techniques often fail to produce desirable results; the challenge lies in analyzing and understanding the characteristics of massive data sets by retrieving useful geometric and statistical patterns. We propose a supervised parallel fuzzy rough support vector machine (PFRSVM) for data classification in a cloud environment. The classification is performed by PFRSVM using a hyperbolic tangent kernel. The fuzzy rough set model takes care of the sensitivity to noisy samples and handles impreciseness in training samples, bringing robustness to the results. The membership function is a function of the center and radius of each class in feature space and is represented with the kernel; it plays an important role in sampling the decision surface. The success of PFRSVM is governed by choosing appropriate parameter values. The training samples are either linearly or nonlinearly separable, and different input points make unique contributions to the decision surface. The algorithm is parallelized with a view to reducing training times. The system is built on a support vector machine library using the Hadoop implementation of MapReduce. The algorithm is tested on large data sets to check its feasibility and convergence, and the performance of the classifier is also assessed in terms of the number of support vectors. The challenges encountered in implementing big data classification in machine learning frameworks are also discussed. The experiments were run on the cloud environment available at the University of Technology and Management, India. Results are illustrated for Gaussian RBF and Bayesian kernels.
The effect of variability in prediction and generalization of PFRSVM is examined with respect to values of the parameter C. The method effectively resolves the effects of outliers and the imbalanced and overlapping-class problems, generalizes to unseen data, and relaxes the dependency between features and labels. The average classification accuracy of PFRSVM is better than that of other classifiers for both Gaussian RBF and Bayesian kernels. The experimental results on both synthetic and real data sets clearly demonstrate the superiority of the proposed technique.
Keywords: FRSVM, Hadoop, MapReduce, PFRSVM
Procedia PDF Downloads 490
1227 Evaluation of the Cytotoxicity and Cellular Uptake of a Cyclodextrin-Based Drug Delivery System for Cancer Therapy
Authors: Caroline Mendes, Mary McNamara, Orla Howe
Abstract:
Drug delivery systems are proposed for use in cancer treatment to specifically target cancer cells and deliver a therapeutic dose without affecting normal cells. For that purpose, the use of folate receptors (FR) can be considered a key strategy, since they are commonly over-expressed in cancer cells. In this study, cyclodextrins (CDs) have been used as vehicles to target FR and deliver the chemotherapeutic drug methotrexate (MTX). CDs have the ability to form inclusion complexes, in which molecules of suitable dimensions are included within their cavities. Here, β-CD has been modified using folic acid so as to specifically target the FR. Thus, this drug delivery system consists of β-CD, folic acid and MTX (CDEnFA:MTX). Cellular uptake of folic acid is mediated with high affinity by folate receptors, while the cellular uptake of antifolates, such as MTX, is mediated with high affinity by the reduced folate carriers (RFCs). This study addresses the gene (mRNA) and protein expression levels of FRs and RFCs in the cancer cell lines CaCo-2, SKOV-3, HeLa, MCF-7 and A549 and the normal cell line BEAS-2B, quantified by real-time polymerase chain reaction (real-time PCR) and flow cytometry, respectively. From these, four cell lines with different levels of FRs were chosen for cytotoxicity assays of MTX and CDEnFA:MTX using the MTT assay. Real-time PCR and flow cytometry data demonstrated that all cell lines ubiquitously express moderate levels of RFC. These experiments also showed that levels of FR protein in CaCo-2 cells are high, while levels in SKOV-3, HeLa and MCF-7 cells are moderate. A549 and BEAS-2B cells express low levels of FR protein, and FRs are highly expressed in all the cancer cell lines analysed when compared to the normal cell line BEAS-2B. The cell lines CaCo-2, MCF-7, A549 and BEAS-2B were used in the cell viability assays.
Forty-eight hours of treatment with the free drug and the complex resulted in IC50 values of 93.9 µM ± 15.2 and 56.0 µM ± 4.0 for CaCo-2 for free MTX and CDEnFA:MTX, respectively; 118.2 µM ± 16.8 and 97.8 µM ± 12.3 for MCF-7; 36.4 µM ± 6.9 and 75.0 µM ± 10.5 for A549; and 132.6 µM ± 16.1 and 288.1 µM ± 26.3 for BEAS-2B. These results demonstrate that free MTX is more toxic towards cell lines expressing low levels of FR, such as BEAS-2B. More importantly, they demonstrate that the inclusion complex CDEnFA:MTX showed greater cytotoxicity than the free drug towards the high-FR-expressing CaCo-2 cells, indicating that it has the potential to target this receptor, enhancing the specificity and the efficiency of the drug. Cell imaging by confocal microscopy has allowed visualisation of FR targeting in cancer cells, as well as identification of the internalisation pathway of the drug. Hence, the cellular uptake and internalisation process of this drug delivery system is being addressed.
Keywords: cancer treatment, cyclodextrins, drug delivery, folate receptors, reduced folate carriers
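As a reminder of how an IC50 is read off MTT dose-response data, the sketch below linearly interpolates, on a log-dose scale, the concentration at which viability crosses 50%. This is a generic illustration with made-up numbers, not the authors' analysis pipeline; in practice a four-parameter logistic fit is usually preferred.

```python
import math

def ic50(doses, viability):
    """Estimate IC50 by log-linear interpolation between the two doses that
    bracket 50% viability. `doses` (ascending, e.g. in µM) and `viability`
    (percent, decreasing with dose) must have the same length."""
    pairs = list(zip(doses, viability))
    for (d0, v0), (d1, v1) in zip(pairs, pairs[1:]):
        if v0 >= 50.0 >= v1:  # viability crosses 50% in this interval
            if v0 == v1:
                return float(d0)
            frac = (v0 - 50.0) / (v0 - v1)
            log_d = math.log10(d0) + frac * (math.log10(d1) - math.log10(d0))
            return 10 ** log_d
    return None  # 50% viability never reached in the tested range

# Hypothetical dose-response curve: viability hits exactly 50% at 10 µM.
print(ic50([1, 10, 100], [90.0, 50.0, 10.0]))  # -> 10.0
```

The `None` return mirrors the experimental situation where the highest tested dose still leaves more than half the cells viable, so no IC50 can be reported.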
Procedia PDF Downloads 310
1226 Geochemical Modeling of Mineralogical Changes in Rock and Concrete in Interaction with Groundwater
Authors: Barbora Svechova, Monika Licbinska
Abstract:
Geochemical modeling of the mineralogical changes of various materials in contact with an aqueous solution is an important tool for predicting the processes affecting those materials and their evolution at a site. The modeling here focused on the interaction of groundwater with the rock mass and its subsequent influence on concrete structures. The studied locality is in Slovakia, in the area of the Liptov Basin, a significant inter-mountain lowland bordered on the north and south by the core mountain belt of the Tatras, in whose center the crystalline basement rises to the surface, accompanied by its Mesozoic cover. Groundwater in the area is bound to structures with a complicated geological setting; from the hydrogeological point of view, it is an environment of crack-fracture character. The area is characterized by shallow surface circulation of groundwater without a significant collector structure, and from a chemical point of view the groundwater has been classified as calcium bicarbonate water with a high content of CO2 and SO4 ions. According to the European standard EN 206-1, these are waters with medium aggressiveness towards concrete. Three rock samples were taken from the area. Based on petrographic and mineralogical research, they were identified as calcareous shale, micritic limestone and crystalline shale. These three rock samples were placed in demineralized water for one month and the change in the chemical composition of the water was monitored. During the solution-rock interaction, the concentrations of all major ions increased, except for nitrates: their concentration rose after a week, but at the end of the experiment it was lower than the initial value. A second experiment examined the interaction of groundwater from the studied locality with a concrete structure; the concrete sample was likewise left in the water for one month.
The results of the experiment confirmed the assumed reduction in the concentrations of calcium and bicarbonate ions in the water due to the precipitation of amorphous forms of CaCO3 on the surface of the sample. Conversely, it was surprising that the concentrations of sulphates, sodium, iron and aluminium increased due to leaching of the concrete. Chemical analyses from these experiments were processed in the PHREEQC program, which calculated the probability of formation of amorphous mineral forms. From the results of the chemical analyses and the hydrochemical modeling of water collected in situ and water from the experiments, it was found that the groundwater at the site is unsaturated and, according to EN 206-1, shows moderate aggressiveness towards reinforced concrete structures, which will affect the homogeneity and integrity of those structures, and that the rocks in the given area release Ca, Na, Fe, HCO3 and SO4. Unsaturated waters will dissolve whatever they come into contact with in the solid matrix; the speed of this process then depends on the physicochemical parameters of the environment (T, ORP, p, n, water retention time in the environment, etc.).
Keywords: geochemical modeling, concrete, dissolution, PHREEQC
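The core quantity behind PHREEQC's mineral saturation output can be illustrated with a small sketch: the saturation index SI = log10(IAP/Ksp), where IAP is the ion activity product. Negative SI means the water is unsaturated and will dissolve the mineral, as found for the groundwater here. The ion activities below are illustrative placeholders, not values from the study.

```python
import math

def saturation_index(iap, ksp):
    """SI = log10(IAP / Ksp): < 0 unsaturated (dissolution favoured),
    0 at equilibrium, > 0 supersaturated (precipitation possible)."""
    return math.log10(iap / ksp)

# Calcite example: IAP = a(Ca2+) * a(CO3^2-); Ksp(calcite) ~ 10^-8.48 at 25 °C.
ksp_calcite = 10 ** -8.48
iap = 1e-4 * 1e-6            # illustrative ion activities
si = saturation_index(iap, ksp_calcite)
print(round(si, 2))          # -> -1.52 (unsaturated: calcite would dissolve)
```

PHREEQC performs the same comparison for every mineral phase in its database after speciating the measured concentrations into activities, which is why it can rank which phases are likely to dissolve or precipitate.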
Procedia PDF Downloads 197
1225 The Principal-Agent Model with Moral Hazard in the Brazilian Innovation System: The Case of 'Lei do Bem'
Authors: Felippe Clemente, Evaldo Henrique da Silva
Abstract:
The need to adopt some type of industrial and innovation policy in Brazil is a recurring theme in the discussion of public interventions aimed at boosting economic growth. For many years, the country has adopted various policies to change its productive structure in order to increase the participation of the sectors with the greatest potential to generate innovation and economic growth. Only in the 2000s were tax incentives adopted in Brazil as a policy to support industrial and technological innovation, as an instrument associated with productivity growth and economic development. In this context, in late 2004 and 2005, Brazil reformulated its institutional apparatus for innovation in order to approach the OECD conventions and the Frascati Manual. The Innovation Law (2004) and the 'Lei do Bem' (2005) reduced some institutional barriers to innovation, provided incentives for university-business cooperation, and modified access to tax incentives for innovation. Chapter III of the 'Lei do Bem' (Law no. 11,196/05) is currently the most comprehensive fiscal incentive to stimulate innovation. It complies with the requirement that the Union should encourage innovation in companies and industry by granting tax incentives. With its introduction, the bureaucratic procedure was simplified, as it no longer required pre-approval of projects or participation in bidding documents. However, preliminary analysis suggests that this instrument has not yet been able to stimulate the sectoral diversification of these investments in Brazil, since its benefits are mostly captured by sectors that had already developed this activity, thus showing problems of moral hazard. It is necessary, then, to analyze the 'Lei do Bem' to determine whether some change is indeed needed, investigating what changes should be implemented in Brazilian innovation policy.
This work is therefore a first effort to analyze a current national problem, evaluating the effectiveness of the 'Lei do Bem' and suggesting public policies that help direct the State in elaborating legislation capable of encouraging agents to comply with what it prescribes. As a preliminary result, it is known that 130 firms used fiscal incentives for innovation in 2006, 320 in 2007 and 552 in 2008. Although this number is rising, it is still small, considering that around six thousand firms perform research and development (R&D) activities in Brazil. Moreover, another issue with the 'Lei do Bem' concerns the percentages of tax incentives provided to companies: these percentages reveal a significant sectoral correlation between the R&D expenditures of large companies and the R&D expenses of companies that accessed the 'Lei do Bem', reaching a correlation of 95.8% in 2008. Given these results, it becomes relevant to investigate the law's ability to stimulate private investment in R&D.
Keywords: Brazilian innovation system, moral hazard, R&D, Lei do Bem
Procedia PDF Downloads 337
1224 Application of Forensic Entomology to Estimate the Post Mortem Interval
Authors: Meriem Taleb, Ghania Tail, Fatma Zohra Kara, Brahim Djedouani, T. Moussa
Abstract:
Forensic entomology has grown immensely as a discipline in the past thirty years. Its main purpose is to establish the post mortem interval, or PMI. Three days after death, insect evidence is often the most accurate, and sometimes the only, method of determining the elapsed time since death. This work presents the estimation of the PMI in an experiment testing the reliability of the accumulated degree days (ADD) method, and the application of this method in a real case. The study was conducted at the Laboratory of Entomology of the National Institute for Criminalistics and Criminology of the National Gendarmerie, Algeria. The domestic rabbit Oryctolagus cuniculus L. was selected as the animal model. On 8 July 2012, the animal was killed. Larvae were collected and reared to adulthood. The oviposition time was estimated by summing the daily average temperatures minus the minimum development temperature (specific to each species); the day on which this sum reaches the species' required thermal constant corresponds to the oviposition day. Weather data were obtained from the nearest meteorological station. After rearing was accomplished, three species emerged: Lucilia sericata, Chrysomya albiceps, and Sarcophaga africa. Chrysomya albiceps requires an accumulation of 186°C; the emergence of adults occurred on 22 July 2012, and a value of 193.4°C was reached on 9 August 2012. Lucilia sericata requires an accumulation of 207°C; the emergence of adults occurred on 23 July 2012, and a value of 211.35°C was reached on 9 August 2012. It should also be considered that oviposition may occur more than 12 hours after death. Thus, the obtained PMI is in agreement with the actual time of death. We illustrate the use of this method with the investigation of a case of a decaying human body found on 3 March 2015 in Bechar, in the south-west of the Algerian desert. Maggots were collected and sent to the Laboratory of Entomology.
Lucilia sericata adults were identified on 24 March 2015 after emergence. A sum of 211.6°C was reached on 1 March 2015, which corresponds to the estimated day of oviposition. Therefore, the estimated date of death is 1 March 2015 ± 24 hours. The PMI estimated by the accumulated degree days (ADD) method appears to be very precise. Entomological evidence should always be used in homicide investigations when the time of death cannot be determined by other methods.
Keywords: forensic entomology, accumulated degree days, post mortem interval, Diptera, Algeria
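The ADD back-calculation described above can be sketched as follows: walking backwards day by day from the date of adult emergence, the daily (mean temperature − base temperature) contributions are accumulated until the species' thermal constant is reached; that day is the estimated oviposition day. The temperatures, base temperature and thermal constant below are illustrative, not the study's data.

```python
def oviposition_days_back(daily_means, base_temp, thermal_constant):
    """Return how many days before adult emergence the accumulated degree
    days reach `thermal_constant` (1-based count of days walked back).

    `daily_means` lists daily mean temperatures in °C, most recent first
    (index 0 = day of emergence, walking backwards in time). Days at or
    below `base_temp` contribute nothing to development.
    """
    add = 0.0
    for days_back, t in enumerate(daily_means, start=1):
        add += max(0.0, t - base_temp)
        if add >= thermal_constant:
            return days_back
    return None  # temperature record too short to reach the thermal constant

# Illustration: constant 20 °C days with a 10 °C base give 10 degree-days
# per day, so a 207 degree-day requirement is met 21 days before emergence.
print(oviposition_days_back([20.0] * 30, base_temp=10.0, thermal_constant=207.0))  # -> 21
```

In casework the daily means come from the nearest meteorological station, as in the abstract, and the result is subtracted from the emergence date to obtain the oviposition (and hence approximate death) date.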
Procedia PDF Downloads 294
1223 Tumor Size and Lymph Node Metastasis Detection in Colon Cancer Patients Using MR Images
Authors: Mohammadreza Hedyehzadeh, Mahdi Yousefi
Abstract:
Colon cancer is one of the most common cancers, and its prevalence is predicted to increase due to poor eating habits. Nowadays, owing to people's busy lifestyles, the consumption of fast food is increasing, and therefore the diagnosis and treatment of this disease are of particular importance. To determine the best treatment approach for each specific colon cancer patient, the oncologist must know the stage of the tumor. The most common method to determine the tumor stage is the TNM staging system, in which M indicates the presence of metastasis, N indicates the extent of spread to the lymph nodes, and T indicates the size of the tumor. Clearly, in order to determine all three of these parameters, an imaging method must be used, and the gold-standard imaging protocols for this purpose are CT and PET/CT. In CT imaging, due to the use of X-rays, the absorbed dose, and hence the cancer risk to the patient, is high, while the PET/CT method suffers from a lack of access to the device due to its high cost. Therefore, in this study, we aimed to estimate the tumor size and the extent of its spread to the lymph nodes using MR images. More than 1300 MR images were collected from the TCIA portal, and in the first step (pre-processing), histogram equalization was applied to improve image quality and the images were resized to a uniform size. Two expert radiologists, who have worked for more than 21 years on colon cancer cases, segmented the images and extracted the tumor region. The next step is feature extraction from the segmented images, followed by classification of the data into three classes: T0N0, T3N1 and T3N2. In this article, the VGG-16 convolutional neural network has been used to perform both of the above-mentioned tasks, i.e., feature extraction and classification. This network has 13 convolutional layers for feature extraction and three fully connected layers, with the softmax activation function, for classification.
In order to validate the proposed method, 10-fold cross-validation was used: the data were randomly divided into three parts, training (70% of the data), validation (10% of the data) and the rest for testing. This is repeated 10 times; each time, the accuracy, sensitivity and specificity of the model are calculated, and the average over the ten repetitions is reported as the result. The accuracy, specificity and sensitivity of the proposed method on the test dataset were 89.09%, 95.8% and 96.4%, respectively. Compared to previous studies, the use of a safe imaging technique (MRI) and the avoidance of predefined hand-crafted imaging features for determining the stage of colon cancer patients are among the advantages of this study.
Keywords: colon cancer, VGG-16, magnetic resonance imaging, tumor size, lymph node metastasis
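The per-fold metrics described above, accuracy, sensitivity and specificity computed from a confusion matrix and then averaged over folds, can be sketched generically as below. The counts are illustrative, not the study's data; for three-class staging they would be computed one-vs-rest per class.

```python
def metrics(tp, tn, fp, fn):
    """Accuracy, sensitivity (recall) and specificity from confusion counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return accuracy, sensitivity, specificity

def cross_validation_summary(fold_counts):
    """Average each metric over the folds, as in k-fold cross-validation."""
    per_fold = [metrics(*counts) for counts in fold_counts]
    k = len(per_fold)
    return tuple(sum(fold[i] for fold in per_fold) / k for i in range(3))

# Two illustrative folds of (TP, TN, FP, FN) counts.
folds = [(45, 40, 5, 10), (48, 42, 3, 7)]
acc, sens, spec = cross_validation_summary(folds)
print(round(acc, 3), round(sens, 3), round(spec, 3))  # -> 0.875 0.845 0.911
```

Reporting the mean over folds, rather than a single split, is what gives cross-validated figures such as the 89.09% accuracy above their robustness to a lucky or unlucky test partition.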
Procedia PDF Downloads 59
1222 Biosensor: An Approach towards Sustainable Environment
Authors: Purnima Dhall, Rita Kumar
Abstract:
Introduction: The River Yamuna flows through the national capital territory (NCT) and is the primary source of drinking water for the city. Delhi discharges about 3,684 MLD of sewage into the Yamuna through its 18 drains. Water quality monitoring is an important aspect of water management for pollution control, and public concern and legislation now demand better environmental control. The conventional method of estimating BOD5 has several drawbacks: it is expensive, time-consuming, and requires highly trained personnel. Stringent forthcoming regulations on wastewater have created an urgent need for analytical systems that contribute to greater process efficiency. Biosensors offer the possibility of real-time analysis. Methodology: In the present study, a novel rapid method for the determination of biochemical oxygen demand (BOD) has been developed. Using the developed method, the BOD of a sample can be determined within 2 hours, compared to 3-5 days with the standard BOD assay. Moreover, the test is based on a specified consortium instead of undefined seeding material, which minimizes variability among the results. The device is coupled to software that automatically calculates the required dilution, so prior dilution of the sample before BOD estimation is not needed. The developed BOD biosensor uses immobilized microorganisms to sense the biochemical oxygen demand of industrial wastewaters of low, moderate, or high biodegradability. The method is quick, robust, online, and less time-consuming. Findings: The results of extensive testing of the developed biosensor on drains demonstrate that the BOD values obtained by the device correlated well with conventional BOD values (R2 = 0.995). The reproducibility of the measurements with the BOD biosensor was within a percentage deviation of ±10%.
Advantages of the developed BOD biosensor: • Determines water pollution quickly, within 2 hours; • Works with all types of wastewater; • Has a prolonged shelf life of more than 400 days; • Offers enhanced repeatability and reproducibility; • Eliminates the need for COD estimation. Distinctiveness of the technology: • Bio-component: can determine the BOD load of all types of wastewater; • Immobilization: increased shelf life (> 400 days), extended stability and viability; • Software: reduces manual errors and estimation time. Conclusion: The developed BOD biosensor can be used to measure the BOD of real wastewater samples and showed good reproducibility in the results. This technology is useful in deciding treatment strategies well ahead of time, thereby facilitating the discharge of properly treated water into common water bodies. The developed technology has been transferred to M/s Forbes Marshall Pvt Ltd, Pune.
Keywords: biosensor, biochemical oxygen demand, immobilized, monitoring, Yamuna
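The calibration reported in the findings (device readings against conventional BOD values, R2 = 0.995, deviations within ±10%) can be verified with a short script; the sample readings below are hypothetical, used only to illustrate the computation:

```python
# Sketch: correlating biosensor BOD readings with conventional (3-5 day)
# BOD values and checking the ±10% deviation criterion. All data points
# below are hypothetical, for illustration only.

def r_squared(x, y):
    """Coefficient of determination for a least-squares line y = a*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    a = sxy / sxx
    b = my - a * mx
    ss_res = sum((yi - (a * xi + b)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return 1 - ss_res / ss_tot

conventional = [40.0, 85.0, 150.0, 220.0, 310.0]   # mg/L, standard assay
biosensor    = [42.0, 82.0, 155.0, 214.0, 318.0]   # mg/L, 2 h device

r2 = r_squared(conventional, biosensor)
max_dev = max(abs(b - c) / c * 100 for b, c in zip(biosensor, conventional))
print(round(r2, 3), round(max_dev, 1))
```

A device passing the criteria above would show r2 close to 1 and a maximum deviation under 10%.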
Procedia PDF Downloads 278
1221 Curriculum Transformation: Multidisciplinary Perspectives on ‘Decolonisation’ and ‘Africanisation’ of the Curriculum in South Africa’s Higher Education
Authors: Andre Bechuke
Abstract:
The years 2015-2017 witnessed a huge campaign, and in some instances violent protests, in South Africa by students and some groups of academics advocating the decolonisation of university curricula. These protests have created high expectations that universities will teach a curriculum relevant to the country and the continent, while also enabling South Africa to participate in a globalised world. To realise this purpose, most universities are currently taking steps to transform and decolonise their curricula. However, the transformation process is challenged and delayed by the lack of a collective understanding of the concepts ‘decolonisation’ and ‘Africanisation’ that should guide its application. Even more challenging is the lack of a contextual understanding of these concepts across different university disciplines. Against this background, and underpinned by a qualitative research paradigm, the perspectives on these concepts held by different university disciplines were examined in order to understand and establish their implementation in the curriculum transformation agenda. Data were collected by reviewing the teaching and learning plans of 8 faculties of an institution of higher learning in South Africa and analysed through content and textual analysis. The findings revealed varied understandings and uses of these concepts in the transformation of the curriculum across faculties. Decolonisation, according to the faculties of Law and Humanities, is perceived as the eradication of the Eurocentric positioning of curriculum content and of the constitutive rules and norms that control thinking. This is not done by ignoring other knowledge traditions; rather, it calls for an affirmation and validation of African views of the world and systems of thought, mixing them with current knowledge.
For the Faculty of Natural and Agricultural Sciences, decolonisation is seen as making the content of the curriculum relevant to students, fulfilling the needs of industry, and equipping students for job opportunities. This means using teaching strategies and methods that are inclusive of students from diverse cultures and structuring the learning experience in ways that are not alien to the students' cultures. For the Health Sciences, decolonisation of the curriculum refers to the need for a shift in Western thinking towards greater sensitivity to all cultural beliefs and thoughts. Collectively, decolonisation of education thus entails that a nation become independent with regard to the acquisition of knowledge, skills, values, beliefs, and habits. Based on the findings, for universities to successfully transform their curricula and integrate the concepts of decolonisation and Africanisation, there is a need to determine the meaning of the concepts contextually in general terms and then narrow them down to what they should mean for specific disciplines. Universities should refrain from taking an umbrella approach to these concepts. Decolonisation should be seen as a means and not an end. A decolonised curriculum should equally be developed from the finest knowledge, skills, values, beliefs, and habits from around the world, not limited to one country or continent.
Keywords: Africanisation, curriculum, transformation, decolonisation, multidisciplinary perspectives, South Africa’s higher education
Procedia PDF Downloads 160
1220 Growth and Characterization of Cuprous Oxide (Cu2O) Nanorods by Reactive Ion Beam Sputter Deposition (IBSD) Method
Authors: Assamen Ayalew Ejigu, Liang-Chiun Chao
Abstract:
In current semiconductor and nanotechnology research, quality material synthesis, proper characterization, and production are major challenges. As cuprous oxide (Cu2O) is a promising semiconductor material for photovoltaic (PV) and other optoelectronic applications, this study aimed to grow and characterize high-quality Cu2O nanorods for improving the efficiency of thin-film solar cells and for other potential applications. In this study, well-structured cuprous oxide (Cu2O) nanorods were successfully fabricated using the IBSD method, in which the Cu2O samples were grown on silicon substrates at a substrate temperature of 400°C in an IBSD chamber at a pressure of 4.5 x 10-5 torr, using copper as the target material. Argon and oxygen were used as the sputter and reactive gases, respectively. The Cu2O nanorods (NRs) were characterized in comparison with a Cu2O thin film (TF) deposited by the same method but with different Ar:O2 flow rates. With an Ar:O2 ratio of 9:1, single-phase, pure polycrystalline Cu2O NRs with a diameter of ~500 nm and a length of ~4.5 µm were grown. On increasing the oxygen flow rate, pure single-phase polycrystalline Cu2O thin film (TF) was obtained at an Ar:O2 ratio of 6:1. Field emission scanning electron microscope (FE-SEM) measurements showed that both samples have smooth morphologies. X-ray diffraction and Raman scattering measurements reveal the presence of single-phase Cu2O in both samples. The differences between the Raman scattering and photoluminescence (PL) bands of the two samples were also investigated, and the results showed differences in intensities, in the number of bands, and in band positions. Raman characterization shows that the Cu2O NRs sample has pronounced Raman band intensities and a higher number of Raman bands than the Cu2O TF, which exhibits only one second-overtone Raman signal (at 217 cm-1).
Temperature-dependent photoluminescence (PL) spectra showed that the defect luminescence band centered at 720 nm (1.72 eV) is dominant for the Cu2O NRs, while the 640 nm (1.937 eV) band was the only PL band observed from the Cu2O TF. The difference in the optical and structural properties of the samples arises from the change in oxygen flow rate within the process window of the sample deposition. This provides a roadmap for further investigation of the electrical and other optical properties for the tunable fabrication of nano/micro-structured Cu2O samples, for improving the efficiency of thin-film solar cells in addition to other potential applications. Finally, the novel morphologies and excellent structural and optical properties show that the grown Cu2O NRs sample is of sufficient quality for further research on nano/micro-structured semiconductor materials.
Keywords: defect levels, nanorods, photoluminescence, Raman modes
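The band energies quoted above follow from the standard photon energy-wavelength relation E(eV) ≈ 1239.84 / λ(nm); a quick check reproduces both reported values:

```python
# Sketch: converting the PL band wavelengths reported above into photon
# energies via E (eV) = h*c / (lambda * e) ≈ 1239.84 / lambda(nm).

def photon_energy_ev(wavelength_nm):
    """Photon energy in eV for a vacuum wavelength given in nm."""
    return 1239.84 / wavelength_nm

for wl in (720.0, 640.0):   # NR defect band and TF band, respectively
    print(wl, "nm ->", round(photon_energy_ev(wl), 3), "eV")
```

The 720 nm band converts to 1.72 eV and the 640 nm band to 1.937 eV, matching the values stated in the abstract.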
Procedia PDF Downloads 241
1219 Extraction of Rice Bran Protein Using Enzymes and Polysaccharide Precipitation
Authors: Sudarat Jiamyangyuen, Tipawan Thongsook, Riantong Singanusong, Chanida Saengtubtim
Abstract:
Rice is both a staple food and an export commodity of Thailand. Rice bran, which makes up 10.5% of the rice grain, is a by-product of the rice milling process. Rice bran is normally used as a raw material for rice bran oil production or sold as feed at a low price. Therefore, this study aimed to increase the value of the defatted rice bran obtained after extraction of rice bran oil. Conventionally, the protein in defatted rice bran is extracted using alkaline extraction and acid precipitation, which reduces the nutritious components in rice bran. Rice bran protein concentrate is suitable for those who are allergic to proteins from other sources, e.g., milk or wheat. In addition to its hypoallergenic property, rice bran protein also contains a good quantity of lysine. Thus, it may be a suitable ingredient for infant food formulations while adding variety to the restricted diets of children with food allergies. The objectives of this study were to compare the properties of rice bran protein concentrate (RBPC) extracted from defatted rice bran using enzymes, with a precipitation step using polysaccharides (alginate and carrageenan), to those of a control sample extracted using the conventional method. The results showed that enzymatic extraction of protein from rice bran gave higher protein recovery than alkaline extraction. Extraction with 2% (v/w) alcalase at 50°C and pH 9.5 gave the highest protein (2.44%) and yield (32.09%) in the extracted solution compared to the other enzymes. Rice bran protein concentrate powder prepared with a precipitation step using alginate (protein in solution:alginate ratio of 1:0.006) exhibited the highest protein (27.55%) and yield (6.62%); precipitation using alginate was better than acid precipitation. RBPC extracted with alkaline (ALK) or the enzyme alcalase (ALC) and then precipitated with alginate (AL) (samples RBP-ALK-AL and RBP-ALC-AL) yielded precipitation rates of 75% and 91.30%, respectively.
Protein precipitation using alginate was therefore selected. The amino acid profiles of the control sample and the sample precipitated with alginate, compared with casein and soy protein isolate, showed that the control sample had the highest content among all samples. A study of the functional properties of RBPC showed that the highest nitrogen solubility occurred at pH 8-10. There was no statistically significant difference between the emulsion capacity and emulsion stability of the control and the sample precipitated by alginate. However, the control sample showed higher foaming capacity and lower foam stability compared to the sample precipitated with alginate. The work succeeded in minimizing the chemicals used in the extraction and precipitation steps of rice bran protein concentrate preparation. This research yields a value-added product in which the protein content is double (28%) the original amount (14%) contained in rice bran, which could be beneficial for addition to food products, e.g., a healthy drink high in protein and fiber. In addition, basic knowledge of the functional properties of rice bran protein concentrate was obtained, which can be used to select appropriate applications for this value-added product from rice bran.
Keywords: alginate, carrageenan, rice bran, rice bran protein
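Yield figures of the kind reported above can be expressed as the protein mass recovered relative to the protein initially present in the bran; the sketch below uses this simple definition, which is an assumption rather than the study's exact formula, and the batch masses are hypothetical (only the 14% protein content of the bran comes from the text):

```python
# Sketch: a simple protein-yield calculation. The 14% protein content of
# rice bran is taken from the text; the recovered masses are invented for
# illustration, and this definition of yield is an assumption.

PROTEIN_FRACTION = 0.14   # protein content of rice bran (from the text)

def protein_yield(bran_mass_g, recovered_protein_g):
    """Recovered protein as a percentage of the protein in the raw bran."""
    protein_in_bran = bran_mass_g * PROTEIN_FRACTION
    return recovered_protein_g / protein_in_bran * 100.0

# Hypothetical batch: 500 g of defatted bran, 22.5 g of protein recovered
print(round(protein_yield(500.0, 22.5), 2))   # percent
```

With these invented masses the yield comes out near the 32% range reported for the alcalase extraction step.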
Procedia PDF Downloads 295
1218 Examining the Critical Factors for Success and Failure of Common Ticketing Systems
Authors: Tam Viet Hoang
Abstract:
With a plethora of new mobility services and payment systems found in our cities and across modern public transportation systems, several cities globally have turned to common ticketing systems to help navigate this complexity. By enabling time- and space-differentiated fare structures and tariff schemes, common ticketing systems can optimize transport utilization rates, achieve cost efficiencies, and provide key incentives to specific target groups. However, not all cities and transportation systems have enjoyed a smooth journey towards the adoption, roll-out, and servicing of common ticketing systems, with experiences of both success and failure attributed to a wide variety of critical factors. Using case study research as the methodology and cities as the main unit of analysis, this research addresses the fundamental question: “what are the critical factors for the success and failure of common ticketing systems?” Taking rail/train systems as the entry point, the study begins by providing background on the evolution of transport ticketing and justifying the improvements in operational efficiency that can be achieved through common ticketing systems. Examining the socio-economic benefits of common ticketing, the research also articulates the value derived for the different key stakeholder groups identified. By reviewing case studies of the implementation of common ticketing systems in different cities, the research explores lessons learned, with the aim of eliciting the factors that ensure seamlessly connected, integrated e-ticketing platforms. In an increasingly digital age, in which cities are now coming online, this paper seeks to unpack these critical factors through case study research drawing on the literature and lived experiences.
To offer a better understanding of the enabling environment and the ideal mixture of ingredients for the successful roll-out of a common ticketing system, interviews will be conducted with transport operators from several selected cities to better appreciate the challenges they faced and the strategies employed to overcome them. Meanwhile, as new mobile applications and user interfaces are introduced to facilitate ticketing and payment as part of the transport journey, we take stock of the numerous policy challenges ahead and their implications for city-wide and system-wide urban planning. It is hoped that this study will help to identify the critical factors for the success and failure of common ticketing systems for cities set to embark on their implementation, while serving to fine-tune processes in those cities where common ticketing systems are already in place. Outcomes from the study will facilitate an improved understanding of common pitfalls and of the essential milestones towards the roll-out of a common ticketing system for railway systems, especially in emerging countries where mass rapid transit systems are under consideration or construction.
Keywords: common ticketing, public transport, urban strategies, Bangkok, Fukuoka, Sydney
Procedia PDF Downloads 88
1217 Development of Risk Index and Corporate Governance Index: An Application on Indian PSUs
Authors: M. V. Shivaani, P. K. Jain, Surendra S. Yadav
Abstract:
Public Sector Undertakings (PSUs), being government-owned organizations, have commitments to the economic and social wellbeing of society; this commitment needs to be reflected in their risk-taking, decision-making, and governance structures. The primary objective of the study is therefore to suggest measures that may lead to improved performance of PSUs. To achieve this objective, two normative frameworks (one relating to risk levels and the other to governance structure) are put forth. The risk index is based on nine risks, such as solvency risk, liquidity risk, and accounting risk, each scored on a scale of 1 to 5. The governance index is based on eleven variables, such as board independence, diversity, and the existence of a risk management committee, each scored on a scale of 1 to 5. The sample consists of 39 PSUs that featured in the Nifty 500 index, and the study covers the 10-year period from April 1, 2005 to March 31, 2015. Return on assets (ROA) and return on equity (ROE) are used as proxies for firm performance. The control variables in the model include firm age, firm growth rate, and firm size; a dummy variable is also used to factor in the effects of recession. Given the panel nature of the data and the possibility of endogeneity, dynamic panel data generalized method of moments (Diff-GMM) regression has been used. It is worth noting that the corporate governance index is positively related to both ROA and ROE, indicating that with improvement in governance structure, PSUs tend to perform better. Considering the components of the CGI, it may be suggested that PSUs (i) ensure adequate representation of women on the Board, (ii) appoint a Chief Risk Officer, and (iii) constitute a risk management committee. The results also indicate a negative association between the risk index and returns.
These results not only validate the framework used to develop the risk index but also give PSUs a yardstick against which to benchmark their risk-taking if they want to maximize their ROA and ROE. While constructing the CGI, certain non-compliances were observed, even with mandatory requirements such as the proportion of independent directors. Such infringements call for stringent penal provisions and better monitoring of PSUs. Further, if the Securities and Exchange Board of India (SEBI) and the Ministry of Corporate Affairs (MCA) bring about such reforms in the PSUs and make adherence to the normative frameworks put forth in this study mandatory, PSUs may achieve more effective and efficient decision-making, lower risks, and hassle-free management, all ultimately leading to better ROA and ROE.
Keywords: PSU, risk governance, diff-GMM, firm performance, risk index
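A composite index of the kind described above can be built by scoring each component on a 1-5 scale and aggregating; the sketch below uses an unweighted mean over hypothetical component scores (most of the component names and the equal weighting are assumptions for illustration, not the study's exact scheme):

```python
# Sketch: constructing a composite index from component scores on a 1-5
# scale, as with the risk and governance indices described above. Equal
# weighting and most component names are assumptions for illustration;
# only solvency, liquidity and accounting risk are named in the text.

def composite_index(scores):
    """Unweighted mean of 1-5 component scores, validated to be in range."""
    if not all(1 <= s <= 5 for s in scores.values()):
        raise ValueError("every component score must lie between 1 and 5")
    return sum(scores.values()) / len(scores)

# Hypothetical scores for one firm on nine risk components
risk_scores = {
    "solvency": 4, "liquidity": 3, "accounting": 5, "market": 2,
    "credit": 3, "operational": 4, "legal": 5, "strategic": 3, "currency": 4,
}
print(round(composite_index(risk_scores), 2))
```

The eleven-variable governance index aggregates in exactly the same way, after which both indices can enter the Diff-GMM regression as explanatory variables.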
Procedia PDF Downloads 157
1216 An Analytical Systematic Design Approach to Evaluate Ballistic Performance of Armour Grade AA7075 Aluminium Alloy Using Friction Stir Processing
Authors: Lahari Ramya P., Sudhakar I., Madhu V., Madhusudhan Reddy G., Srinivasa Rao E.
Abstract:
The selection of suitable armour materials for defence applications is crucial for increasing the mobility of systems while maintaining safety. Armour design studies therefore seek the material with the lowest possible areal density that successfully resists the predefined threat. A number of light metals and alloys have come to the forefront, especially as substitutes for armour-grade steels. AA5083 aluminium alloy, which meets the military standards imposed by the US Army, is the foremost non-ferrous alloy considered as a possible replacement for steel to increase the mobility of armoured vehicles and enhance fuel economy. The growing need for AA5083 aluminium alloy paves the way to develop supplementary aluminium alloys that maintain the military standards. It has been observed that AA2xxx, AA6xxx, and AA7xxx aluminium alloys are potential materials to supplement AA5083. Among these series, AA7xxx aluminium alloy (heat treatable) possesses high strength and can compete with armour-grade steels. Earlier investigations revealed that layering AA7xxx aluminium alloy can prevent spalling of the rear portion of the armour during ballistic impacts. Hence, the present investigation deals with the fabrication of a hard layer (made of boron carbide) on AA7075 aluminium alloy using friction stir processing, with the intention that the hard layer blunts the projectile on initial impact while the tough backing (the AA7xxx aluminium alloy) dissipates the residual kinetic energy. An analytical approach has been adopted to determine the ballistic performance against the projectile. The penetration of the projectile into the armour has been resolved using a strain energy model analysis, with the perforation shearing area, i.e., the interface of projectile and armour, taken into account in evaluating penetration into the armour.
The fabricated surface composites (targets) were tested as per the military standard (JIS.0108.01) in a ballistic testing tunnel at the Defence Metallurgical Research Laboratory (DMRL), Hyderabad, under standardized testing conditions. The analytical results were well validated against the experimental ones.
Keywords: AA7075 aluminium alloy, friction stir processing, boron carbide, ballistic performance, target
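The energy balance underlying such an analysis can be illustrated with a simple sketch: subtracting the energy absorbed in perforation from the projectile's kinetic energy gives the residual velocity. The absorption term below is a crude textbook shear-plugging estimate, not the study's strain energy model, and all projectile and material values are hypothetical:

```python
import math

# Sketch: an energy-balance estimate of residual projectile velocity after
# perforation. The absorption term W = tau * (pi*d*t) * t (shear stress on
# the plug's lateral surface times the plate thickness as displacement) is
# a textbook simplification, NOT the strain energy model of the study;
# all numbers below are hypothetical.

def residual_velocity(mass_kg, v_impact, d_m, t_m, tau_pa):
    """Residual velocity (m/s) after plugging; 0 if the target stops it."""
    kinetic = 0.5 * mass_kg * v_impact ** 2
    absorbed = tau_pa * math.pi * d_m * t_m * t_m   # shear area * displacement
    if absorbed >= kinetic:
        return 0.0                                  # projectile defeated
    return math.sqrt(2.0 * (kinetic - absorbed) / mass_kg)

# Hypothetical 7.62 mm, 10 g projectile at 700 m/s against a 20 mm plate
v_r = residual_velocity(0.010, 700.0, 0.00762, 0.020, 3.0e8)
print(round(v_r, 1))   # 0.0 -> the plate defeats the projectile
```

Halving the plate thickness in this toy model leaves a large residual velocity, which is the kind of trade-off the layered hard-face/tough-backing design above is meant to address.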
Procedia PDF Downloads 330
1215 Numerical Investigation of Flow Boiling within Micro-Channels in the Slug-Plug Flow Regime
Authors: Anastasios Georgoulas, Manolia Andredaki, Marco Marengo
Abstract:
The present paper investigates the hydrodynamic and heat transfer characteristics of slug-plug flows under saturated flow boiling conditions within circular micro-channels. Numerical simulations are carried out using an enhanced version of the open-source solver ‘interFoam’ of the OpenFOAM CFD toolbox. The proposed user-defined solver is based on the Volume Of Fluid (VOF) method for interface advection, and the enhancements include the implementation of a smoothing process for spurious current reduction, coupling with heat transfer and phase change, and the incorporation of conjugate heat transfer to account for transient solid conduction. In all of the cases considered in the present paper, a single-phase simulation is initially conducted until a quasi-steady state is reached with respect to the hydrodynamic and thermal boundary layer development. Then, successive vapour bubbles are patched upstream at a predefined, constant frequency at a certain distance from the channel inlet. The proposed numerical set-up can capture the main hydrodynamic and heat transfer characteristics of slug-plug flow regimes within circular micro-channels. In more detail, the present investigation focuses on the interaction between subsequent vapour slugs with respect to their generation frequency, the hydrodynamic characteristics of the liquid film between the generated vapour slugs and the channel wall, and those of the liquid plug between two subsequent vapour slugs. The investigation is carried out for three different working fluids and three different values of applied heat flux in the heated part of the considered micro-channel. Post-processing and analysis of the results indicate that the dynamics of the evolving bubbles in each case are influenced by both the upstream and downstream bubbles in the generated sequence. In each case, a slip velocity between the vapour bubbles and the liquid slugs is evident.
In most cases, interfacial waves appear close to the bubble tail that significantly reduce the liquid film thickness. Finally, in accordance with previous investigations, vortices identified in the liquid slugs between two subsequent vapour bubbles can significantly enhance convective heat transfer between the liquid regions and the heated channel walls. The overall results of the present investigation enhance the current understanding by providing better insight into the complex underpinning heat transfer mechanisms of saturated boiling within micro-channels in the slug-plug flow regime.
Keywords: slug-plug flow regime, micro-channels, VOF method, OpenFOAM
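For context on the liquid film discussed above, the film thickness around a confined bubble in a circular channel is commonly estimated with the Aussillous-Quéré correlation, δ/R = 1.34 Ca^(2/3) / (1 + 3.35 Ca^(2/3)) with capillary number Ca = μU/σ. This is standard background theory rather than part of the study's VOF solver, and the fluid properties below (roughly water at ambient conditions) are illustrative only:

```python
# Sketch: estimating the liquid film thickness around a vapour slug in a
# circular micro-channel with the Aussillous-Quere correlation. Standard
# background theory, not the study's solver; property values below are
# illustrative (roughly water at ambient conditions).

def film_thickness(radius_m, mu, sigma, bubble_speed):
    """Film thickness (m): delta/R = 1.34 Ca^(2/3) / (1 + 3.35 Ca^(2/3))."""
    ca = mu * bubble_speed / sigma          # capillary number
    x = ca ** (2.0 / 3.0)
    return radius_m * 1.34 * x / (1.0 + 3.35 * x)

# 500-micrometre-diameter channel, water-like fluid, bubble at 0.5 m/s
delta = film_thickness(250e-6, 1.0e-3, 0.072, 0.5)
print(round(delta * 1e6, 2), "micrometres")
```

Films of roughly this thickness (around ten micrometres here) are what the tail interfacial waves described above further thin out, which is why they matter for the wall heat transfer.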
Procedia PDF Downloads 267
1214 Carbon Dioxide Capture and Utilization by Using Seawater-Based Industrial Wastewater and Alkanolamine Absorbents
Authors: Dongwoo Kang, Yunsung Yoo, Injun Kim, Jongin Lee, Jinwon Park
Abstract:
Since the industrial revolution, energy usage by human beings has increased drastically, resulting in enormous emissions of carbon dioxide into the atmosphere. A high concentration of carbon dioxide is well recognized as the main driver of climate change, as it disturbs the heat equilibrium of the earth. Many technologies have been developed to decrease carbon dioxide emissions. One method is to capture carbon dioxide after the combustion process using liquid absorbents. However, some nations cannot treat and store captured carbon dioxide properly because of their geological structures, and stored carbon dioxide can leak out when crustal activity is high. Hence, methods to convert carbon dioxide into stable and useful products were developed; this approach is usually called CCU, that is, Carbon Capture and Utilization. There are several ways to convert carbon dioxide into useful substances: for example, it can be converted into fuels such as diesel, or into plastics and polymers. However, these technologies require a great deal of energy to turn stable carbon dioxide into a reactive form. Hence, converting it into metal carbonate salts has been studied widely. When carbon dioxide is captured by alkanolamine-based liquid absorbents, it exists in ionic forms such as carbonate, carbamate, and bicarbonate. When adequate metal ions are added, metal carbonate salts can be produced by an ionic reaction with fast kinetics. However, finding metal sources can be an obstacle to commercializing this method: if natural resources such as calcium oxide were used to supply calcium ions, treating carbon dioxide would not be economically feasible. In this research, highly concentrated industrial wastewater produced from a refined salt production facility has been used as the metal supplying source, especially for calcium cations.
To ensure the purity of the final products, calcium ions were first selectively separated in the form of gypsum dihydrate. Carbon dioxide was then captured using alkanolamine-based absorbents, converting it into a reactive ionic form, and a high-purity calcium carbonate salt was produced. The presence of calcium carbonate was confirmed by X-Ray Diffraction (XRD) and Scanning Electron Microscopy (SEM) images. Carbon dioxide loading curves for absorption, conversion, and desorption are also provided, and reabsorption experiments were performed to investigate the possibility of absorbent reuse. The calcium carbonate produced as the final product appears to have potential for use in various industrial fields, including the cement and paper-making industries and pharmaceutical engineering.
Keywords: alkanolamine, calcium carbonate, climate change, seawater, industrial wastewater
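The mineralization step described above, Ca2+ + CO3(2-) → CaCO3, fixes the overall mass balance; a quick stoichiometric sketch (molar masses only, assuming complete conversion with no process losses) shows how much calcium carbonate one tonne of captured CO2 can yield:

```python
# Sketch: stoichiometry of the carbonation step Ca2+ + CO3(2-) -> CaCO3.
# Assumes complete conversion and no process losses; purely illustrative.

M_CO2 = 44.01     # molar mass of CO2, g/mol
M_CACO3 = 100.09  # molar mass of CaCO3, g/mol

def caco3_from_co2(co2_mass_kg):
    """Mass of CaCO3 (kg) produced from a mass of CO2, 1:1 molar ratio."""
    return co2_mass_kg / M_CO2 * M_CACO3

print(round(caco3_from_co2(1000.0), 1))   # kg of CaCO3 per tonne of CO2
```

Each tonne of fixed CO2 thus corresponds to roughly 2.27 tonnes of calcium carbonate, which also indicates how much calcium the wastewater stream must supply.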
Procedia PDF Downloads 185