Search results for: learning and teaching environment
911 Functional Surfaces and Edges for Cutting and Forming Tools Created Using Directed Energy Deposition
Authors: Michal Brazda, Miroslav Urbanek, Martina Koukolikova
Abstract:
This work focuses on the development of functional surfaces and edges for cutting and forming tools created through Directed Energy Deposition (DED) technology. In the context of growing challenges in modern engineering, additive technologies, especially DED, present an innovative approach to manufacturing tools for forming and cutting. One of the key features of DED is its ability to precisely and efficiently deposit fully dense metal from powder feedstock, enabling the creation of complex geometries and optimized designs. DED is gradually becoming an increasingly attractive choice for tool production due to its ability to achieve high precision while simultaneously minimizing waste and material costs. Tools created using DED technology gain significant durability through the utilization of high-performance materials such as nickel alloys and tool steels. For high-temperature applications, Nimonic 80A alloy is applied, while for cold applications, M2 tool steel is used. The addition of ceramic materials, such as tungsten carbide, can significantly increase the tool's resistance. The introduction of functionally graded materials is a significant contribution, opening up new possibilities for gradual changes in the mechanical properties of the tool and optimizing its performance in different sections according to specific requirements. This work presents an overview of individual applications and their utilization in industry. Microstructural analyses have been conducted, providing detailed insights into the structure of individual components alongside examinations of the mechanical properties and tool life. These analyses offer a deeper understanding of the efficiency and reliability of the created tools, which is a key element for successful development in the field of cutting and forming tools. 
The production of functional surfaces and edges using DED technology can result in financial savings, as the entire tool does not have to be manufactured from expensive special alloys. The tool can be made from common steel, onto which a functional surface of special materials is applied. Additionally, DED allows for tool repairs after wear, eliminating the need to produce a new part, contributing to overall cost savings and reducing the environmental footprint. Overall, the combination of DED technology, functionally graded materials, and verified technologies collectively sets a new standard for innovative and efficient development of cutting and forming tools in the modern industrial environment.
Keywords: additive manufacturing, directed energy deposition, DED, laser, cutting tools, forming tools, steel, nickel alloy
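The functionally graded approach described above can be illustrated with a minimal sketch: blending a hard-facing reinforcement (e.g., tungsten carbide) into a steel matrix layer by layer toward the functional surface. The linear blending rule, layer count, and fractions below are illustrative assumptions, not parameters reported in the study.

```python
# Illustrative sketch: linear composition grading across DED layers,
# increasing the reinforcement fraction from the substrate toward the
# functional surface. All numbers are assumed for illustration only.

def graded_composition(n_layers, surface_fraction):
    """Return the reinforcement fraction for each deposited layer,
    rising linearly from 0 at the substrate to surface_fraction
    at the functional surface."""
    if n_layers < 2:
        return [surface_fraction]
    step = surface_fraction / (n_layers - 1)
    return [round(i * step, 4) for i in range(n_layers)]

layers = graded_composition(n_layers=5, surface_fraction=0.4)
print(layers)  # → [0.0, 0.1, 0.2, 0.3, 0.4], substrate -> surface
```

In a real build, each entry would drive the powder feed ratio for one deposited layer, avoiding a sharp interface between the common-steel body and the special-alloy surface.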
Procedia PDF Downloads 50
910 Constructing and Circulating Knowledge in Continuous Education: A Study of Norwegian Educational-Psychological Counsellors' Reflection Logs in Post-Graduate Education
Authors: Moen Torill, Rismark Marit, Astrid M. Solvberg
Abstract:
In Norway, every municipality shall provide an educational psychological service (EPS) to support kindergartens and schools in their work with children and youths with special needs. The EPS focuses its work on individuals, aiming to identify special needs and to give advice to teachers and parents when they ask for it. In addition, the service gives priority to prevention and system intervention in kindergartens and schools. To help counsellors master these substantial tasks, university courses have been established to support EPS counsellors' continuous learning. There is, however, a need for more in-depth and systematic knowledge of how they experience the courses they attend. In this study, EPS counsellors’ reflection logs during a particular course are investigated. The research question is: what are the content and priorities of the reflections communicated in the logs produced by the educational psychological counsellors during a post-graduate course? The investigated course is a credit course organized over a one-year period in two one-semester modules. The 55 students enrolled in the course all work as EPS counsellors in various municipalities across Norway. At the end of each day throughout the course period, the participants wrote reflection logs about what they had experienced during the day. The data material consists of 165 pages of typed text. The collaborating researchers studied the data material to ascertain, differentiate and understand the meaning of the content in each log. The analysis also involved the search for similarity in content and the development of analytical categories that described the focus and primary concerns in each of the written logs. This involved constant 'critical and sustained discussions' for the mutual construction of meaning between the co-researchers as the categories developed. The process is inspired by Grounded Theory. 
This means that the concepts developed during the analysis were derived from the data material rather than chosen prior to the investigation. The analysis revealed that the concept 'Useful' frequently appeared in the participants’ reflections and, as such, 'Useful' serves as a core category. The core category is described through three major categories: (1) knowledge sharing (concerning direct and indirect work with students with special needs) with colleagues is useful, (2) reflections on models and theoretical concepts (concerning students with special needs) are useful, (3) reflection on the role of the EPS counsellor is useful. In all the categories, the notion of usefulness occurs in the participants’ emphasis on and acknowledgement of the immediate and direct link between the university course content and their daily work practice. Even though each category has an importance and value of its own, it is crucial that they are understood in connection with one another and as interwoven. It is this connectedness that gives the core category an overarching explanatory power. The knowledge from this study may be a relevant contribution when it comes to designing new courses that support continuing professional development for EPS counsellors, whether post-graduate university courses or local courses at EPS offices, and whether in Norway or in other countries.
Keywords: constructing and circulating knowledge, educational-psychological counsellor, higher education, professional development
Procedia PDF Downloads 115
909 Ethicality of Algorithmic Pricing and Consumers’ Resistance
Authors: Zainab Atia, Hongwei He, Panagiotis Sarantopoulos
Abstract:
Over the past few years, firms have witnessed a massive increase in sophisticated algorithmic deployment, which has become quite pervasive in today’s society. With the wide availability of data for retailers, the ability to track consumers using algorithmic pricing has become an integral option on online platforms. As more companies transform their businesses and rely more on massive technological advancement, algorithmic pricing systems have attracted attention and seen wide adoption, with many accompanying benefits and challenges found within their usage. With the overall aim of increasing organizational profits, algorithmic pricing is becoming a sound option by enabling suppliers to cut costs, allowing better services, improving efficiency and product availability, and enhancing overall consumer experiences. The adoption of algorithms in retail has been studied widely in the literature across varied fields, including marketing, computer science, engineering, economics, and public policy. What is more pressing today, however, is a comprehensive understanding of this technology and its associated ethical influence on consumers’ perceptions and behaviours. Indeed, due to algorithmic ethical concerns, consumers are found to be reluctant in some instances to share their personal data with retailers, which reduces retention and leads to negative consumer outcomes. This, in turn, raises the question of whether firms can still foster the acceptance of such technologies by consumers while minimizing the ethical transgressions that accompany their deployment. Adding to the modest body of recent research within marketing and consumer behaviour, the current research advances the literature on algorithmic pricing, pricing ethics, consumers’ perceptions, and price fairness. 
With its empirical focus, this paper aims to contribute to the literature by applying the distinction between the two common types of algorithmic pricing, dynamic and personalized, while measuring their relative effect on consumers’ behavioural outcomes. From a managerial perspective, this research offers significant implications for providing a better human-machine interactive environment (whether online or offline) to improve both businesses’ overall performance and consumers’ wellbeing. By allowing more transparent pricing systems, businesses can pursue ethical strategies that foster consumer loyalty and extend post-purchase engagement. Thus, by finding the right balance of pricing and the right measures, whether using dynamic or personalized pricing (or both), managers can approach consumers more ethically while treating their expectations and responses with critical attention.
Keywords: algorithmic pricing, dynamic pricing, personalized pricing, price ethicality
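The distinction drawn above between dynamic and personalized pricing can be sketched as two simple pricing rules: dynamic pricing adjusts one price for everyone based on market-level demand, while personalized pricing varies the price per consumer using tracked data. The demand multiplier and willingness-to-pay proxy below are hypothetical illustrations, not the authors' models.

```python
# Toy sketch of the two algorithmic pricing types discussed above.
# All coefficients are assumed for illustration only.

BASE_PRICE = 100.0

def dynamic_price(base, demand_ratio):
    """Dynamic pricing: one price for all consumers, adjusted to
    market-level demand (demand_ratio = current / normal demand)."""
    return round(base * demand_ratio, 2)

def personalized_price(base, wtp_score):
    """Personalized pricing: price varies per consumer, using a
    willingness-to-pay proxy in [0, 1] inferred from tracked data."""
    return round(base * (0.8 + 0.4 * wtp_score), 2)

print(dynamic_price(BASE_PRICE, 1.2))       # → 120.0  (high demand raises the price for all)
print(personalized_price(BASE_PRICE, 0.9))  # → 116.0  (a high-WTP consumer pays more)
```

The ethical questions the paper raises map directly onto these rules: the dynamic rule is uniform but volatile, while the personalized rule charges otherwise identical consumers different prices based on their data.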
Procedia PDF Downloads 91
908 Renewable Natural Gas Production from Biomass and Applications in Industry
Authors: Sarah Alamolhoda, Kevin J. Smith, Xiaotao Bi, Naoko Ellis
Abstract:
For millennia, biomass has been the most important source of fuel used to produce energy. Energy derived from biomass is renewable through the re-growth of biomass. Various technologies are used to convert biomass into potential renewable products, including combustion, gasification, pyrolysis and fermentation. Gasification is the incomplete combustion of biomass in a controlled environment that results in valuable products such as syngas, bio-oil and biochar. Syngas is a combustible gas consisting of hydrogen (H₂), carbon monoxide (CO), carbon dioxide (CO₂), and traces of methane (CH₄) and nitrogen (N₂). Cleaned syngas can be used as a turbine fuel to generate electricity, as a raw material for hydrogen and synthetic natural gas production, or as the anode gas of solid oxide fuel cells. In this work, syngas produced from woody biomass gasification in British Columbia, Canada, was introduced to two consecutive fixed bed reactors to perform a catalytic water gas shift reaction followed by a catalytic methanation reaction. The water gas shift reaction is a well-established industrial process and is used to increase the hydrogen content of the syngas before the methanation process. Catalysts were used in the process since both reactions are reversible and exothermic, thermodynamically preferred at lower temperatures while kinetically favored at elevated temperatures. The water gas shift reactor and the methanation reactor were packed with a Cu-based catalyst and a Ni-based catalyst, respectively. Simulated syngas with different percentages of CO, H₂, CH₄, and CO₂ was fed to the reactors to investigate the effect of operating conditions in the unit. The water gas shift experiments were run at temperatures of 150 ˚C to 200 ˚C and pressures of 550 kPa to 830 kPa. Similarly, the methanation experiments were run at temperatures of 300 ˚C to 400 ˚C and pressures of 2340 kPa to 3450 kPa. 
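The two catalytic steps described above correspond to the following reactions, written here for reference; the standard enthalpies are textbook values, not figures reported in the abstract:

```latex
% Water gas shift (reversible, exothermic):
\mathrm{CO} + \mathrm{H_2O} \rightleftharpoons \mathrm{CO_2} + \mathrm{H_2},
\qquad \Delta H^{\circ}_{298} \approx -41\ \mathrm{kJ/mol}

% CO methanation (reversible, exothermic):
\mathrm{CO} + 3\,\mathrm{H_2} \rightleftharpoons \mathrm{CH_4} + \mathrm{H_2O},
\qquad \Delta H^{\circ}_{298} \approx -206\ \mathrm{kJ/mol}
```

Both being exothermic explains the trade-off noted in the abstract: equilibrium conversion is favored at low temperature, while reaction rate is favored at high temperature, motivating the use of Cu- and Ni-based catalysts at moderate temperatures.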
The methanation reaction reached 98% CO conversion at 340 ˚C and 3450 kPa, at which point more than half of the CO was converted to CH₄. Increasing the reaction temperature reduced CO conversion and increased CH₄ selectivity. The process was designed to be renewable and to release low greenhouse gas emissions. Syngas is a clean-burning fuel; nevertheless, through the water gas shift reaction, toxic CO was removed and hydrogen, a green fuel, was produced. Moreover, in the methanation process, the syngas energy was transformed into a fuel with higher energy density (per volume), reducing the amount of fuel that must flow through the equipment and improving process efficiency. Natural gas has about 3.5 times the energy density (energy per volume) of hydrogen and is easier to store and transport. When modification of existing infrastructure is not practical, partial blending of renewable hydrogen into natural gas (up to 15% hydrogen content) preserves efficiency while reducing the greenhouse gas emission footprint.
Keywords: renewable natural gas, methane, hydrogen, gasification, syngas, catalysis, fuel
Procedia PDF Downloads 118
907 Experimental Uniaxial Tensile Characterization of One-Dimensional Nickel Nanowires
Authors: Ram Mohan, Mahendran Samykano, Shyam Aravamudhan
Abstract:
Metallic nanowires with sub-micron and hundreds-of-nanometer diameters have a diversity of applications in nano/micro-electromechanical systems (NEMS/MEMS). Characterizing the mechanical properties of such sub-micron and nano-scale metallic nanowires is tedious and requires sophisticated and careful experimentation performed within high-powered microscopy systems (scanning electron microscope (SEM), atomic force microscope (AFM)). Nanoscale devices are also needed for placing the nanowires and loading them under the intended conditions; obtaining load–deflection data during the deformation within the high-powered microscopy environment poses significant challenges. Even picking the grown nanowires and placing them correctly within a nanoscale loading device is not an easy task. Mechanical characterizations of such nanowires through experimental methods are still very limited. Various techniques at different levels of fidelity, resolution, and induced error have been attempted by materials science and nanomaterial researchers. The methods for determining the load and deflection within the nanoscale devices also pose a significant problem. The state of the art is thus still in its infancy. All these factors result in, and are seen in, the wide differences between the characterization curves and the properties reported in the current literature. In this paper, we discuss and present our experimental method, results, and discussion of uniaxial tensile loading and the development of the subsequent stress–strain characteristic curves for Nickel nanowires. Nickel nanowires in the diameter range of 220–270 nm were obtained in our laboratory via electrodeposition, a solution-based, template method followed in the present work for growing 1-D Nickel nanowires. 
Process variables such as the presence and intensity of a magnetic field and varying electrical current density during the electrodeposition process were found to influence the morphological and physical characteristics, including the crystal orientation and size of the grown nanowires. To further understand how the electrodeposition process variables and the associated structural features of our grown Nickel nanowires correlate with their mechanical properties, careful experiments within a scanning electron microscope (SEM) were conducted. Details of the uniaxial tensile characterization, testing methodology, nanoscale testing device, load–deflection characteristics, microscopy images of failure progression, and the subsequent stress–strain curves are discussed and presented.
Keywords: uniaxial tensile characterization, nanowires, electrodeposition, stress-strain, nickel
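The stress–strain curves developed above come from converting measured load–deflection data using the wire geometry. The sketch below shows the standard engineering-stress conversion for a round wire; the 250 nm diameter, load, and gauge length are assumed for illustration, not the study's measured values.

```python
import math

def engineering_stress_strain(load_nN, elongation_nm, diameter_nm, gauge_length_um):
    """Convert one load-deflection point into engineering stress (MPa)
    and engineering strain for a round nanowire in uniaxial tension."""
    area_nm2 = math.pi * diameter_nm**2 / 4.0        # cross-section, nm^2
    stress_mpa = (load_nN / area_nm2) * 1e3          # 1 nN/nm^2 = 1 GPa -> MPa
    strain = elongation_nm / (gauge_length_um * 1e3)  # dimensionless
    return stress_mpa, strain

# Assumed example point: 10 uN load, 50 nm elongation, 250 nm wire, 5 um gauge
stress, strain = engineering_stress_strain(10_000, 50, 250, 5.0)
print(round(stress, 1), strain)  # → 203.7 0.01
```

Repeating this conversion over the full load–deflection record yields the stress–strain curve; true stress and strain would additionally require tracking the instantaneous cross-section.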
Procedia PDF Downloads 406
906 Electromagnetic-Mechanical Stimulation on PC12 for Enhancement of Nerve Axonal Extension
Authors: E. Nakamachi, K. Matsumoto, K. Yamamoto, Y. Morita, H. Sakamoto
Abstract:
Recently, electromagnetic and mechanical stimulation have been recognized as effective extracellular environment stimulation techniques to enhance the regeneration of defected peripheral nerve tissue. In this study, we developed a new hybrid bioreactor by adopting 50 Hz uniform alternating current (AC) magnetic stimulation and 4% strain mechanical stimulation. The guide tube for nerve regeneration is a mesh-structured tube made of a biodegradable polymer such as polylactic acid (PLA). However, when neural damage is large, there is a possibility that the peripheral nerve undergoes necrosis, so it is quite important to accelerate nerve tissue regeneration by enhancing the nerve axonal extension rate. Therefore, we designed and fabricated a system that can simultaneously apply uniform AC magnetic field stimulation and stretch stimulation to cells for the enhancement of nerve axonal extension. We then evaluated the system's performance and the effectiveness of each stimulation for rat adrenal pheochromocytoma cells (PC12). First, we designed and fabricated the uniform AC magnetic field system and the stretch stimulation system. For the AC magnetic stimulation system, we focused on the use of a pole piece structure to carry out in-situ microscopic observation. We designed an optimum pole piece structure using magnetic field finite element analyses and the response surface methodology. We fabricated the uniform AC magnetic field stimulation system as a bioreactor by adopting the analytically determined design specifications. We measured the magnetic flux density generated by the uniform AC magnetic field stimulation system and confirmed that the measured values show good agreement with the analytical results, with a uniform magnetic field observed. Second, we fabricated the cyclic stretch stimulation device under the conditions of particular strains, where the chamber was made of polyoxymethylene (POM). 
We measured strains in the PC12 cell culture region to confirm the uniformity of the strain. We found values slightly different from the target strain but concluded that these differences were allowable in this mechanical stimulation system. We then evaluated the effectiveness of each stimulation in enhancing nerve axonal extension using PC12 cells. We confirmed that the average axonal extension length of PC12 under the uniform AC magnetic stimulation was increased by 16% at 96 h in our bioreactor. We could not confirm axonal extension enhancement under the stretch stimulation condition, where we observed exfoliation of cells. The hybrid stimulation, however, enhanced axonal extension, because the magnetic stimulation inhibits the exfoliation of cells. We therefore concluded that the enhancement of PC12 axonal extension is due to the magnetic stimulation rather than the mechanical stimulation, and confirmed the effectiveness of uniform AC magnetic field stimulation for nerve axonal extension using PC12 cells.
Keywords: nerve cell PC12, axonal extension, nerve regeneration, electromagnetic-mechanical stimulation, bioreactor
Procedia PDF Downloads 265
905 Contextual Toxicity Detection with Data Augmentation
Authors: Julia Ive, Lucia Specia
Abstract:
Understanding and detecting toxicity is an important problem in supporting safer human interactions online. Our work focuses on the important problem of contextual toxicity detection, where automated classifiers are tasked with determining whether a short textual segment (usually a sentence) is toxic within its conversational context. We use “toxicity” as an umbrella term to denote a number of variants commonly named in the literature, including hate, abuse and offence, among others. Detecting toxicity in context is a non-trivial problem and has been addressed by very few previous studies. These previous studies have analysed the influence of conversational context on human perception of toxicity in controlled experiments and concluded that humans rarely change their judgements in the presence of context. They have also evaluated contextual detection models based on state-of-the-art Deep Learning and Natural Language Processing (NLP) techniques. Counterintuitively, they reached the general conclusion that computational models tend to suffer performance degradation in the presence of context. We challenge these empirical observations by devising better contextual predictive models that also rely on NLP data augmentation techniques to create larger and better data. In our study, we start by further analysing the human perception of toxicity in conversational data (i.e., tweets), in the absence versus presence of context, in this case, previous tweets in the same conversational thread. We observed that the conclusions of previous work on human perception are mainly due to data issues: the contextual data available does not provide sufficient evidence that context is indeed important (even for humans). The data problem is common in current toxicity datasets: cases labelled as toxic are either obviously toxic (i.e., overt toxicity with swear words, racist slurs, etc.), and thus context is not needed for a decision, or are ambiguous, vague or unclear even in the presence of context; in addition, the data contains labelling inconsistencies. 
To address this problem, we propose to automatically generate contextual samples where toxicity is not obvious (i.e., covert cases) without context, or where different contexts can lead to different toxicity judgements for the same tweet. We generate toxic and non-toxic utterances conditioned on the context or on target tweets using a range of techniques for controlled text generation (e.g., Generative Adversarial Networks and steering techniques). On the contextual detection models, we posit that their poor performance is due to limitations of both the data they are trained on (the same problems stated above) and the architectures they use, which are not able to leverage context in effective ways. To improve on that, we propose text classification architectures that take the hierarchy of conversational utterances into account. In experiments benchmarking our models against previous ones on existing and automatically generated data, we show that both data and architectural choices are very important. Our model achieves substantial performance improvements compared to baselines that are non-contextual, or contextual but agnostic of the conversation structure.
Keywords: contextual toxicity detection, data augmentation, hierarchical text classification models, natural language processing
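The conversation-aware architecture argued for above can be sketched, in miniature, as a two-level encoder: each utterance is encoded first, then the thread is pooled so the classifier sees the target tweet together with its context. The hash-based word vectors, dimensions, and mean pooling below are a stdlib stand-in for the neural encoders in the paper, included only to show the hierarchy.

```python
# Minimal two-level (hierarchical) encoding sketch for contextual
# toxicity detection. Level 1 encodes each utterance; level 2 pools
# the conversation thread. Hash-based pseudo-embeddings stand in for
# a learned neural encoder; all dimensions are illustrative.
import hashlib

DIM = 8

def embed_token(token):
    """Deterministic pseudo-embedding derived from a token's hash."""
    digest = hashlib.md5(token.encode()).digest()
    return [b / 255.0 for b in digest[:DIM]]

def encode_utterance(text):
    """Level 1: mean-pool token embeddings into an utterance vector."""
    vecs = [embed_token(t) for t in text.lower().split()] or [[0.0] * DIM]
    return [sum(col) / len(vecs) for col in zip(*vecs)]

def encode_thread(context_utterances, target):
    """Level 2: pool the context utterances, then concatenate with the
    target vector so a downstream classifier can weigh both."""
    ctx_vecs = [encode_utterance(u) for u in context_utterances] or [[0.0] * DIM]
    ctx = [sum(col) / len(ctx_vecs) for col in zip(*ctx_vecs)]
    return ctx + encode_utterance(target)  # feature vector of length 2 * DIM

features = encode_thread(["earlier tweet in thread", "a reply"], "the target tweet")
print(len(features))  # → 16
```

A conversation-agnostic baseline would instead concatenate all text into one flat string, losing the distinction between the target and its context that the hierarchy preserves.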
Procedia PDF Downloads 170
904 Analyzing Data Protection in the Era of Big Data under the Framework of Virtual Property Layer Theory
Authors: Xiaochen Mu
Abstract:
Data rights confirmation, as a key legal issue in the development of the digital economy, is undergoing a transition from a traditional rights paradigm to a more complex private-economic paradigm. In this process, data rights confirmation has evolved from a simple claim of rights into a complex structure encompassing multiple dimensions of personality rights and property rights. Current data rights confirmation practices are primarily reflected in two models: holistic rights confirmation and process rights confirmation. The holistic rights confirmation model continues the traditional "one object, one right" theory, while the process rights confirmation model, through contractual relationships in the data processing process, recognizes rights that are more adaptable to the needs of data circulation and value release. The design of the data property rights system has a hierarchical character, aimed at decoupling raw data from data applications through horizontal stratification and vertical staging. This design not only respects the ownership rights of data originators but also, based on the usufructuary rights of enterprises, constructs a corresponding rights system for the different stages of data processing activities. The subjects of data property rights include both data originators, such as users, and data producers, such as enterprises, who enjoy different rights at different stages of data processing. The intellectual property rights system, with the mission of incentivizing innovation and promoting the advancement of science, culture, and the arts, provides a complete set of mechanisms for protecting innovative results. 
However, unlike traditional private property rights, the granting of intellectual property rights is not an end in itself; the purpose of the intellectual property system is to balance the exclusive rights of the rights holders against the prosperity and long-term development of society's public learning and the entire field of science, culture, and the arts. Therefore, the intellectual property granting mechanism provides both protection and limitations for the rights holder. This aligns well with the dual attributes of data. In terms of achieving the protection of data property rights, the granting of intellectual property rights is an important institutional choice that can enhance the effectiveness of the data property exchange mechanism. Although this is not the only path, the granting of data property rights within the framework of the intellectual property rights system helps to establish fundamental legal relationships and rights confirmation mechanisms and is more compatible with the classification and grading system of data. The modernity of the intellectual property rights system allows it to adapt to the needs of big data technology development through special clauses or industry guidelines, thus promoting the comprehensive advancement of data intellectual property rights legislation. This paper analyzes data protection under the virtual property layer theory and a two-fold virtual property rights system. Based on the "bundle of rights" theory, this paper establishes a specific three-level scheme of data rights. It analyzes the cases Google v. Vidal-Hall, Halliday v Creation Consumer Finance, Douglas v Hello Limited, Campbell v MGN and Imerman v Tchenquiz, and concludes that recognizing property rights over personal data and protecting data under the framework of intellectual property will be beneficial for establishing the tort of misuse of personal information.
Keywords: data protection, property rights, intellectual property, big data
Procedia PDF Downloads 39
903 Adaptive Environmental Control System Strategy for Cabin Air Quality in Commercial Aircrafts
Authors: Paolo Grasso, Sai Kalyan Yelike, Federico Benzi, Mathieu Le Cam
Abstract:
The cabin air quality (CAQ) in commercial aircraft is of prime interest, especially in the context of the COVID-19 pandemic. Current Environmental Control Systems (ECS) rely on a prescribed fresh airflow per passenger to dilute contaminants. An adaptive ECS strategy is proposed, leveraging air sensing and filtration technologies to ensure better CAQ. This paper investigates the CAQ level achieved in commercial aircraft cabins during various flight scenarios. The modeling and simulation analysis is performed in a Modelica-based environment describing the dynamic behavior of the system. The model includes three main systems: the cabin, the recirculation loop and the air-conditioning pack. The cabin model evaluates the thermo-hygrometric conditions and the air quality in the cabin depending on the number of passengers and crew members, the outdoor conditions and the conditions of the air supplied to the cabin. The recirculation loop includes models of the recirculation fan, ordinary and novel filtration technology, the mixing chamber and the outflow valve. The air-conditioning pack includes models of the heat exchangers and turbomachinery needed to condition the hot pressurized air bled from the engine, as well as selected contaminants originating from the outside or bled from the engine. Different ventilation control strategies are modeled and simulated. Currently, a limited understanding of contaminant concentrations in the cabin and the lack of standardized and systematic methods to collect and record data constitute a challenge in establishing a causal relationship between CAQ and passengers' comfort. As a result, contaminants are neither measured nor filtered during flight, and the current sub-optimal way to avoid their accumulation is dilution with the fresh air flow. However, the use of a prescribed amount of fresh air comes with a cost, making the ECS the most energy-demanding non-propulsive system within an aircraft. 
In such a context, this study shows that an ECS based on a reduced and adaptive fresh air flow, relying on air sensing and filtration technologies, provides promising results in terms of CAQ control. The comparative simulation results demonstrate that the proposed adaptive ECS brings substantial improvements to the CAQ, both in controlling the asymptotic values of contaminant concentration and in mitigating hazardous scenarios, such as fume events. Original architectures allowing for adaptive control of the inlet air flow rate based on monitored CAQ will change the requirements for filtration systems and redefine ECS operation.
Keywords: cabin air quality, commercial aircraft, environmental control system, ventilation
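The dilution behaviour discussed above follows from a well-mixed mass balance on the cabin, dC/dt = (S − Q·C)/V, whose asymptotic concentration is C* = S/Q. The Euler sketch below, with assumed cabin volume, contaminant generation rate, and airflow values (not figures from the paper), shows why a reduced fresh airflow without filtration raises the steady-state concentration.

```python
# Well-mixed cabin contaminant balance: dC/dt = (S - Q*C) / V
# C: concentration (mg/m^3), S: generation rate (mg/s),
# Q: fresh airflow (m^3/s), V: cabin volume (m^3).
# All numbers are assumed for illustration only.

def simulate(Q, S=2.0, V=150.0, C0=0.0, dt=1.0, t_end=36000):
    """Forward-Euler integration of the cabin concentration over t_end seconds."""
    C = C0
    for _ in range(int(t_end / dt)):
        C += dt * (S - Q * C) / V
    return C

high_flow = simulate(Q=1.2)  # prescribed high fresh-air flow
low_flow = simulate(Q=0.6)   # reduced fresh-air flow, no filtration
print(round(high_flow, 3), round(low_flow, 3))  # → 1.667 3.333 (each approaches S/Q)
```

An adaptive ECS in this framing would add a filtration removal term (or raise Q only when a sensor reads C above a threshold), keeping C* low at a lower average fresh-air energy cost.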
Procedia PDF Downloads 101
902 A Qualitative Assessment of the Internal Communication of the College of Communication: Basis for a Strategic Communication Plan
Authors: Edna T. Bernabe, Joshua Bilolo, Sheila Mae Artillero, Catlicia Joy Caseda, Liezel Once, Donne Ynah Grace Quirante
Abstract:
Internal communication is significant for an organization to function to its full extent. A strategic communication plan builds an organization’s structure and makes it more systematic. Information is a vital part of communication inside the organization, as it shapes every possible outcome, be it positive or negative. It is, therefore, imperative to assess the communication structure of a particular organization to secure a better and more harmonious communication environment. Thus, this research was intended to identify the internal communication channels used in the Polytechnic University of the Philippines-College of Communication (PUP-COC) as an organization; to identify the flow of information, specifically in downward, upward, and horizontal communication; to assess the accuracy, consistency, and timeliness of its internal communication channels; and to propose a strategic communication plan for information dissemination to improve the existing communication flow in the college. The researchers formulated a framework from the Input-Throughput-Output-Feedback-Goal model of General System Theory and gathered data to assess the PUP-COC’s internal communication. The communication model links the objectives of the study to knowledge of the internal organization of the college. A qualitative approach and the case study tradition of inquiry were used to gain a deeper understanding of the internal organizational communication in PUP-COC, with interviews as the primary method for the study. This was supported with quantitative data gathered through a survey of the students of the college. The researchers interviewed 17 participants: the college dean, the 4 chairpersons of the college departments, 11 faculty members and staff, and the acting Student Council president. An interview guide and a standardized questionnaire were formulated as instruments to generate the data. 
After a thorough analysis, it was found that a two-way communication flow exists in PUP-COC. The type of communication channel the internal stakeholders use varies according to whom a particular person is communicating with. The members of the PUP-COC community also use different types of communication channels depending on the flow of communication being used. The most common types of internal communication are letters and memoranda for downward communication, while letters, text messages, and interpersonal communication are often used in upward communication. Various forms of social media were found to be in use for horizontal communication. Accuracy, consistency, and timeliness play a significant role in information dissemination within the college. However, some problems were also found in the communication system. The most common problems are the delay in the dissemination of memoranda and letters and the uneven distribution of information and instructions to faculty, staff, and students. This led the researchers to formulate a strategic communication plan that proposes strategies to solve the communication problems experienced by the internal stakeholders.
Keywords: communication plan, downward communication, internal communication, upward communication
Procedia PDF Downloads 518
901 Catalytic Pyrolysis of Sewage Sludge for Upgrading Bio-Oil Quality Using Sludge-Based Activated Char as an Alternative to HZSM5
Abstract:
Due to concerns about the depletion of fossil fuel sources and the deteriorating environment, research into the production of renewable energy will play a crucial role in alleviating dependency on mineral fuels. One particular area of interest is the generation of bio-oil through sewage sludge (SS) pyrolysis. SS is a promising candidate compared with other types of biomass due to its availability and low cost. However, the presence of high-molecular-weight hydrocarbons and oxygenated compounds in SS bio-oil hinders some of its fuel applications. In this context, catalytic pyrolysis is another attainable route to upgrade bio-oil quality. Among the different catalysts (e.g., zeolites) studied for SS pyrolysis, activated chars (AC) are eco-friendly alternatives. The beneficial features of AC derived from SS comprise a comparatively large surface area, porosity, enriched surface functional groups, and a high amount of metal species that can improve catalytic activity. Hence, in this study a sludge-based AC catalyst was fabricated in a single-step pyrolysis reaction with NaOH as the activation agent and compared with HZSM5 zeolite. The thermal decomposition and kinetics were investigated via thermogravimetric analysis (TGA) to guide and control the pyrolysis and catalytic pyrolysis and to inform the design of the pyrolysis setup. The results indicated that both pyrolysis and catalytic pyrolysis comprise four distinct stages, with the main decomposition reaction occurring in the range of 200-600°C. The Coats-Redfern method was applied to the 2nd and 3rd devolatilization stages to estimate the reaction order and activation energy (E) from the mass loss data. The average activation energy (Em) values for the reaction orders n = 1, 2, and 3 were in the range of 6.67-20.37 kJ for SS, 1.51-6.87 kJ for HZSM5, and 2.29-9.17 kJ for AC, respectively.
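The Coats-Redfern linearization used above can be sketched numerically. The temperature/conversion points below are hypothetical placeholders for the TGA mass-loss curves, and the routine simply recovers E from the slope of ln[g(α)/T²] versus 1/T:

```python
import math

# Hypothetical TGA data: temperature (K) and conversion alpha (0..1).
# The real curves come from the SS, HZSM5, and AC runs described above.
T = [473, 523, 573, 623, 673, 723, 773, 823]
alpha = [0.05, 0.12, 0.24, 0.40, 0.57, 0.72, 0.84, 0.92]

R = 8.314  # gas constant, J/(mol*K)

def coats_redfern_E(T, alpha, n=1):
    """Estimate activation energy E (J/mol) via the Coats-Redfern method.

    ln[g(alpha)/T^2] = ln[AR/(beta*E)] - E/(R*T), so a least-squares line
    through (1/T, ln[g/T^2]) has slope -E/R.
    """
    def g(a):
        # integral form of the reaction model for order n
        return -math.log(1 - a) if n == 1 else (1 - (1 - a) ** (1 - n)) / (1 - n)

    x = [1.0 / t for t in T]
    y = [math.log(g(a) / t ** 2) for t, a in zip(T, alpha)]
    m = len(x)
    sx, sy = sum(x), sum(y)
    sxx = sum(v * v for v in x)
    sxy = sum(u * v for u, v in zip(x, y))
    slope = (m * sxy - sx * sy) / (m * sxx - sx * sx)
    return -slope * R

for n in (1, 2, 3):
    print(f"n={n}: E = {coats_redfern_E(T, alpha, n) / 1000:.2f} kJ/mol")
```

For n = 1 the model function is g(α) = −ln(1 − α); the other orders use the standard integral form. With real TGA data the fit would be restricted to the devolatilization stages, as in the study.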
According to the results, both AC and HZSM5 were able to improve the reaction rate of SS pyrolysis by reducing the Em value. Moreover, to generate bio-oil and examine the effect of the catalysts on its quality, a fixed-bed pyrolysis system was designed and implemented. The composition of the produced bio-oil was analyzed via gas chromatography/mass spectrometry (GC/MS). The selected SS-to-catalyst ratios were 1:1, 2:1, and 4:1. The optimum ratio in terms of cracking the long-chain hydrocarbons and removing oxygen-containing compounds was 1:1 for both catalysts. The bio-oils upgraded with AC and HZSM5 were in the total range of C4-C17, with around 72% in the range of C4-C9. The bio-oil from pyrolysis of SS contained 49.27% oxygenated compounds, which dropped to 13.02% and 7.3% in the presence of AC and HZSM5, respectively. Meanwhile, the generation of benzene, toluene, and xylene (BTX) compounds was significantly improved in the catalytic process. Furthermore, the fabricated AC catalyst was characterized by BET, SEM-EDX, FT-IR, and TGA techniques. Overall, this research demonstrated that AC is an efficient catalyst for the pyrolysis of SS and can be used as a cost-competitive alternative to HZSM5.
Keywords: catalytic pyrolysis, sewage sludge, activated char, HZSM5, bio-oil
Procedia PDF Downloads 179
900 Five Years Analysis and Mitigation Plans on Adjustment Orders Impacts on Projects in Kuwait's Oil and Gas Sector
Authors: Rawan K. Al-Duaij, Salem A. Al-Salem
Abstract:
Projects, unique and temporary endeavours to achieve a set of requirements, have always been challenging: planning the schedule and budget and managing the resources and risks are mostly driven by similar past experience or the technical consultation of experts in the matter. With that complexity of projects in scope, time, and execution environment, Adjustment Orders are tools to reflect changes to the original project parameters after Contract signature. Adjustment Orders are the official/legal amendments to the terms and conditions of a live Contract. Reasons for issuing Adjustment Orders arise from changes in Contract scope, technical requirements, and specifications resulting in scope addition, deletion, or alteration; they can also arise from a combination of these parameters, resulting in an increase or decrease in time and/or cost. Most business leaders (handling projects in the interest of the owner) refrain from using Adjustment Orders, given their main objectives of staying within budget and on schedule. Success in managing the changes results in uninterrupted execution and agreed project costs and schedule. Nevertheless, this is not always practically achievable. In this paper, a detailed study utilizing Industrial Engineering & Systems Management tools such as Six Sigma, data analysis, and quality control was carried out on the organization’s five-year records of issued Adjustment Orders in order to investigate their prevalence and their time and cost impact. The analysis revealed and helped to categorize the predominant causations with the highest impacts, which were weighted most heavily in recommending the corrective measures needed to minimize the Adjustment Orders' impacts. The data analysis demonstrated no specific trend in the AO frequency over the past five years; however, the time impact is greater than the cost impact.
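A sketch of the Pareto-style ranking used to surface the predominant causations; the adjustment-order records below are illustrative stand-ins, since the organization's five-year dataset is not public:

```python
from collections import Counter

# Illustrative adjustment-order records: (cause, schedule impact in days,
# cost impact in % of contract value). The real data span five years.
orders = [
    ("scope addition", 45, 3.0), ("spec change", 20, 1.5),
    ("scope addition", 60, 4.2), ("site constraint", 90, 0.8),
    ("spec change", 15, 2.1), ("scope deletion", 10, -1.0),
    ("site constraint", 75, 0.5), ("scope addition", 30, 2.5),
]

# Pareto ranking: which causes account for most of the time impact?
time_by_cause = Counter()
for cause, days, _cost in orders:
    time_by_cause[cause] += days

total_days = sum(time_by_cause.values())
cumulative = 0.0
for cause, days in time_by_cause.most_common():
    cumulative += days
    print(f"{cause:15s} {days:4d} days  cumulative {100 * cumulative / total_days:5.1f}%")
```

The same tally, summed over cost impact instead of days, gives the cost-side Pareto chart; the highest-ranked causes are the ones targeted by the corrective measures.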
Although Adjustment Orders might never be avoidable, this analysis offers some insight into the procedural gaps and where they most heavily impact the organization. Possible solutions are concluded, such as improving the project handling team's coordination and communication, utilizing a blanket service contract, and modifying the project gate-system procedures to minimize the possibility of similar struggles in the future. Projects in the Oil and Gas sector are always evolving and demand a certain amount of flexibility to sustain the goals of the field. As demonstrated, the uncertainty of project parameters, inadequate project definition, operational constraints, and stringent procedures are the main factors creating the need for Adjustment Orders, and accordingly the recommendation is to address that challenge.
Keywords: adjustment orders, data analysis, oil and gas sector, systems management
Procedia PDF Downloads 164
899 Precursor Muscle Cell’s Phenotype under Compression in a Biomimetic Mechanical Niche
Authors: Fatemeh Abbasi, Arne Hofemeier, Timo Betz
Abstract:
Muscle growth and regeneration critically depend on satellite cells (SCs), which are muscle stem cells located between the basal lamina and myofibres. Upon damage, SCs become activated, enter the cell cycle, and give rise to myoblasts that form new myofibres, while a sub-population self-renews and re-populates the muscle stem cell niche. In aged muscle, as well as in certain muscle diseases such as muscular dystrophy, some of the SCs lose their regenerative ability. Although it has been demonstrated that the chemical composition of the SCs' quiescent niche differs from that of the activated niche, the mechanism that initially activates the SCs remains unknown. While extensive research efforts have focused on potential chemical activation, no such factor has been identified to the authors' best knowledge. However, it is well substantiated that niche mechanics affects SC behaviors, such as stemness and engraftment. We hypothesize that mechanical stress in the healthy niche (homeostasis) differs from that in the regenerative niche and that this difference could serve as an early signal activating SCs upon fibre damage. To investigate this hypothesis, we developed a biomimetic system to reconstitute both the mechanical and the chemical environment of the SC niche. Cells are confined between two elastic polyacrylamide (PAA) hydrogels with controlled elastic moduli and functionalized surface chemistry. By controlling the distance between the PAA hydrogel surfaces, we vary the compression forces exerted by the substrates on the cells, while lateral displacement of the upper hydrogel creates controlled shear forces. To establish such a system, a simplified configuration is presented: we engineered a sandwich-like arrangement of two elastic PAA layers with stiffnesses between 1 and 10 kPa and confined a precursor myoblast cell line (C2C12) between these layers.
Our initial observations in this sandwich model indicate that C2C12 cells behave differently under mechanical compression compared to a control one-layer gel without compression. Interestingly, this behavior is stiffness-dependent. While the shape of C2C12 cells in a sandwich consisting of two stiff (10 kPa) layers was much more elongated, showing an almost neuronal phenotype, the cell shape in a sandwich consisting of one stiff and one soft (1 kPa) layer was more spherical. Surprisingly, even in proliferation medium and at very low cell density, the sandwich configuration stimulated cell differentiation with increased striation and myofibre formation; such behavior is commonly found only for confluent cells in differentiation medium. These results suggest that mechanical changes in stiffness and applied pressure might be a relevant stimulus for changes in muscle cell behavior.
Keywords: C2C12 cells, compression, force, satellite cells, skeletal muscle
Procedia PDF Downloads 124
898 Evaluation of Arsenic Removal in Synthetic Solutions and Natural Waters by Rhizofiltration
Authors: P. Barreto, A. Guevara, V. Ibujes
Abstract:
In this study, the removal of arsenic from synthetic solutions and from natural water from Papallacta Lagoon was evaluated using the rhizofiltration method with terrestrial and aquatic plant species. Ecuador is a country of high volcanic activity, which is why most water sources originate from volcanic glaciers; it is therefore necessary to find new, affordable, and effective methods for treating water. The water from Papallacta Lagoon shows arsenic levels from 327 µg/L to 803 µg/L. The evaluation of arsenic removal began with the selection of 16 different terrestrial and aquatic plant species. These plants were immersed in solutions with a 4500 µg/L arsenic concentration for 48 hours. Subsequently, 3 terrestrial species and 2 aquatic species were selected based on the highest amount of arsenic absorbed, as analyzed by inductively coupled plasma optical emission spectrometry (ICP-OES), and on their capacity to adapt to the arsenic solution. The chosen terrestrial species were cultivated from seed using hydroponic methods, with coconut fiber and polyurethane foam as substrates. Afterwards, the species that best adapted to the hydroponic environment were selected. Additionally, the development of the selected aquatic species was monitored using a basic nutrient solution providing the nutrients the plants required. Following this procedure, 30 plants of the 3 selected species were exposed to synthetic solutions with arsenic concentrations of 154, 375, and 874 µg/L for 15 days. Finally, the plant that showed the highest arsenic absorption was placed in 3 L of natural water with an arsenic level of 803 µg/L and remained there until the desired level of 10 µg/L was reached. This experiment was carried out over a total of 30 days, during which the arsenic absorption capacity of the plant was measured.
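Removal percentages of the kind reported in this study follow from the standard relation (C0 − Cf) / C0 × 100; a minimal sketch, using the lagoon-water figures from this abstract as the example:

```python
def removal_percent(c0_ugL, cf_ugL):
    """Percent of arsenic removed, from initial and final concentrations (ug/L)."""
    return 100.0 * (c0_ugL - cf_ugL) / c0_ugL

# Bringing the lagoon water from 803 ug/L down to the 10 ug/L target:
print(f"{removal_percent(803, 10):.1f}% removal")  # -> 98.8% removal
```

The same function applied to the synthetic-solution runs gives the per-species removal figures quoted in the results.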
As a result, the five species initially selected were: sunflower (Helianthus annuus), clover (Trifolium), blue grass (Poa pratensis), water hyacinth (Eichhornia crassipes), and miniature aquatic fern (Azolla). The best arsenic removal was shown by the water hyacinth, with 53.7% absorption, followed by the blue grass with 31.3% absorption. The blue grass was also the plant that responded best to hydroponic cultivation, with a germination percentage of 97% and full growth achieved in two months; it was thus the only terrestrial species retained. In summary, the final selected species were blue grass, water hyacinth, and miniature aquatic fern. These three species were evaluated by immersing them in synthetic solutions with three different arsenic concentrations (154, 375, and 874 µg/L). Of the three plants, the water hyacinth showed the highest percentages of arsenic removal, with 98%, 58%, and 64% for the three arsenic solutions, respectively. Finally, 12 water hyacinth plants were used to bring the arsenic level in natural water down to 10 µg/L; this significant reduction in arsenic concentration was achieved in 5 days. In conclusion, water hyacinth was found to be the best plant for reducing arsenic levels in natural water.
Keywords: arsenic, natural water, plant species, rhizofiltration, synthetic solutions
Procedia PDF Downloads 123
897 Subjective Probability and the Intertemporal Dimension of Probability to Correct the Misrelation Between Risk and Return of a Financial Asset as Perceived by Investors. Extension of Prospect Theory to Better Describe Risk Aversion
Authors: Roberta Martino, Viviana Ventre
Abstract:
From a theoretical point of view, the relationship between the risk associated with an investment and its expected value is directly proportional, in the sense that the market allows a greater return to those who are willing to take a greater risk. However, empirical evidence shows that this relationship is distorted in the minds of investors and is perceived as exactly the opposite. To understand the discrepancy between investors' actual actions and the theoretical predictions, this paper analyzes the essential parameters used for the valuation of financial assets, with particular attention to two elements: probability and the passage of time. Although these may at first glance seem to be two distinct elements, they are closely related. In particular, the error in the theoretical description of the relationship between risk and return lies in the failure to consider the impatience that is generated in the decision-maker when events that have not yet happened enter the decision-making context. In this context, probability loses its objective meaning and, in relation to the psychological aspects of the investor, can only be understood as the degree of confidence the investor has in the occurrence or non-occurrence of an event. Moreover, the concept of objective probability does not capture the intertemporality that characterizes financial activities, nor the limited cognitive capacity of the decision-maker. Cognitive psychology has shown that the mind trades off quality against effort when faced with very complex choices. To evaluate an event that has not yet happened, it is necessary to imagine it happening; this projection into the future requires cognitive effort and is what differentiates choices under conditions of risk from choices under conditions of uncertainty.
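The extension discussed here builds on the standard Kahneman-Tversky value function; one illustrative way of adding the temporal component (a sketch, not necessarily the authors' exact formulation) is to weight value by a hyperbolic discount factor:

```latex
% Prospect-theory value function: \alpha, \beta are curvature parameters,
% \lambda > 1 is the loss-aversion coefficient.
v(x) =
  \begin{cases}
    x^{\alpha} & \text{if } x \ge 0,\\
    -\lambda (-x)^{\beta} & \text{if } x < 0.
  \end{cases}

% Hyperbolic discounting of an outcome x received after delay t, with
% impatience parameter k: the larger k, the faster perceived value decays.
V(x, t) = \frac{v(x)}{1 + k\,t}
```

A larger k models a more impatient investor, so the same risky payoff is valued less the further in the future it lies, which is the intertemporal distortion the paper addresses.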
In fact, since the receipt of the outcome in choices under risk conditions is imminent, the mechanism of self-projection into the future is not necessary to imagine the consequence of the choice, and decision-makers can dwell on an objective analysis of the possibilities. Financial activities, on the other hand, develop over time, and objective probability is too static to capture the anticipatory emotions that the self-projection mechanism generates in the investor. Assuming that uncertainty is inherent in the valuation of events that have not yet occurred, the focus must shift from risk management to uncertainty management. Only in this way can the intertemporal dimension of the decision-making environment, and the haste generated by the financial market, be taken into account. The work considers an extension of prospect theory with a temporal component, with the aim of describing the attitude towards risk with respect to the passage of time.
Keywords: impatience, risk aversion, subjective probability, uncertainty
Procedia PDF Downloads 107
896 Analysis of Thermal Comfort in Educational Buildings Using Computer Simulation: A Case Study in Federal University of Parana, Brazil
Authors: Ana Julia C. Kfouri
Abstract:
A prerequisite of any building design is to provide security to the users, taking the climate and its physical and physical-geometrical variables into account. It is also important to highlight the relevance of the right material elements, which stand between the person and the environment and must provide improved thermal comfort conditions with low environmental impact. Furthermore, technology is constantly advancing, as are computational simulations for building projects, and these should be used to develop sustainable buildings and provide a higher quality of life for their users. In relation to comfort, the more satisfied the building users are, the better their intellectual performance will be. On this basis, the study of thermal comfort in educational buildings is of particular relevance, since the thermal characteristics of these environments are of vital importance to all users. Moreover, educational buildings are large constructions, and when they are poorly planned and executed they have negative impacts on the surrounding environment, as well as on user satisfaction, throughout their whole life cycle. In this line of thought, a detailed case study was conducted on the thermal comfort situation at the Federal University of Parana (UFPR) to evaluate university classroom conditions. The main goal of the study is to perform a thermal analysis in three classrooms at UFPR in order to address the subjective and physical variables that influence thermal comfort inside the classroom. For the assessment of the subjective components, a questionnaire was applied to evaluate perceptions of the local thermal conditions. Regarding the physical variables, on-site measurements were carried out, recording air temperature and air humidity both inside and outside the building, as well as meteorological variables such as wind speed and direction, solar radiation, and rainfall, collected from a weather station.
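Comparing the simulated and measured series typically comes down to error statistics such as the root-mean-square error and mean bias; a minimal sketch with hypothetical hourly classroom temperatures (the real series come from the measurement campaign and the EnergyPlus model):

```python
import math

# Illustrative hourly air temperatures (deg C): measured in the classroom
# and reproduced by the simulation model.
measured  = [21.4, 22.0, 22.8, 23.5, 24.1, 24.0, 23.2, 22.5]
simulated = [21.0, 21.8, 22.9, 23.9, 24.6, 24.2, 23.0, 22.1]

def rmse(obs, sim):
    """Root-mean-square error between observed and simulated series."""
    return math.sqrt(sum((o - s) ** 2 for o, s in zip(obs, sim)) / len(obs))

def mbe(obs, sim):
    """Mean bias error: positive when the model over-predicts on average."""
    return sum(s - o for o, s in zip(obs, sim)) / len(obs)

print(f"RMSE = {rmse(measured, simulated):.2f} degC, "
      f"MBE = {mbe(measured, simulated):+.2f} degC")
```

A small RMSE with near-zero bias indicates the simulation reproduces the measured conditions well enough to support conclusions about the local thermal comfort.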
Then, a computer simulation was run in the EnergyPlus software to reproduce the air temperature and air humidity values of the three classrooms studied. The EnergyPlus outputs were analyzed and compared with the on-site measurement results in order to draw conclusions about the local thermal conditions. The methodological approach of the study allowed a distinct perspective on an educational building, leading to a better understanding of classroom thermal performance and the reasons for its behavior. Finally, the study prompts reflection on the importance of thermal comfort for educational buildings, proposes thermal alternatives for future projects, and discusses the significant impact of using computer simulation in engineering solutions to improve the thermal performance of UFPR's buildings.
Keywords: computer simulation, educational buildings, EnergyPlus, humidity, temperature, thermal comfort
Procedia PDF Downloads 387
895 Reading as Moral Afternoon Tea: An Empirical Study on the Compensation Effect between Literary Novel Reading and Readers’ Moral Motivation
Authors: Chong Jiang, Liang Zhao, Hua Jian, Xiaoguang Wang
Abstract:
The belief that there is a strong relationship between narrative reading and morality has generally become a basic assumption of scholars, philosophers, and literary and cultural critics. The virtuality constructed by literary novels invites readers to regard the narrative as a thought experiment, creating distance between readers and events so that they can freely and morally experience the positions of different roles. The virtual narrative, combined with its literary characteristics, is therefore often considered a "moral laboratory." Well-established findings reveal that people show fewer lying and deceptive behaviors in the morning than in the afternoon, called the morning morality effect. As a limited self-regulation resource, morality is continually depleted as the rhythm of the day progresses under the influence of the morning morality effect; it can also be compensated and restored in various ways, such as by eating or sleeping. As a common form of entertainment in modern society, literary novel reading gives people virtual experience and emotional catharsis, much like a relaxing afternoon tea that helps people break away from fast-paced work, restore physical strength, and relieve stress in a short period of leisure. In this paper, inspired by compensatory control theory, we ask whether reading literary novels in a digital environment could replenish a kind of spiritual energy for self-regulation, compensating for people's moral depletion in the afternoon. Based on this assumption, we leverage the social annotation text generated by readers during digital reading to represent readers' attention, recognize its semantics, and calculate the moral motivation expressed in the annotations, investigating the fine-grained dynamics of moral motivation across each time slot within the 24 hours of a day.
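The per-time-slot analysis can be sketched as follows; the (hour, score) pairs are hypothetical stand-ins for the moral-motivation scores computed from the annotation semantics:

```python
from collections import defaultdict
from statistics import mean

# Illustrative (annotation hour, moral-motivation score) pairs; the real
# scores come from the semantic analysis of the annotation text.
annotations = [(8, 0.31), (9, 0.28), (11, 0.35), (13, 0.52),
               (15, 0.58), (16, 0.61), (19, 0.49), (22, 0.44)]

def slot_means(records, slot_hours=4):
    """Mean motivation per fixed-width time slot within the 24-hour day."""
    buckets = defaultdict(list)
    for hour, score in records:
        buckets[(hour // slot_hours) * slot_hours].append(score)
    return {start: mean(scores) for start, scores in sorted(buckets.items())}

for start, m in slot_means(annotations).items():
    print(f"{start:02d}:00-{start + 4:02d}:00  mean motivation {m:.2f}")
```

Changing `slot_hours` to 3 or 6 reproduces the alternative divisions compared in the results; with the four-hour division, afternoon slots should show higher means than morning ones if the compensation effect holds.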
Comprehensive experiments comparing different divisions of the day into time intervals showed that the moral motivation reflected in afternoon annotations is significantly higher than in morning annotations. The results robustly verify the hypothesis that reading compensates for moral motivation, which we call the moral afternoon tea effect. Moreover, we quantitatively identify that such moral compensation can last until 14:00 in the afternoon and 21:00 in the evening. In addition, it is interesting to find that the unit chosen for dividing time intervals affects the identification of moral rhythms: dividing the day into four-hour slots brings more insight into moral rhythms than three-hour or six-hour slots.
Keywords: digital reading, social annotation, moral motivation, morning morality effect, control compensation
Procedia PDF Downloads 149
894 The Use of Video Conferencing to Aid the Decision in Whether Vulnerable Patients Should Attend In-Person Appointments during a COVID Pandemic
Authors: Nadia Arikat, Katharine Blain
Abstract:
During the worst of the COVID pandemic, only essential treatment was provided for patients needing urgent care. With the prolonged extent of the pandemic, there has been a return to more routine referrals for paediatric dentistry advice and treatment for specialist conditions. However, some of these patients and/or their carers may have significant medical issues, meaning that attending in-person appointments carries additional risks. This poses an ethical dilemma for clinicians. This project looks at how a secure video conferencing platform (“Near Me”) has been used to assess the need and urgency for in-person new patient visits, particularly for patients and families with additional risks. “Near Me” is a secure online video consulting service used by NHS Scotland. In deciding whether to bring a new patient to the hospital for an appointment, the clinical condition of the teeth and the urgency for treatment need to be assessed, which is not always apparent from the referral letter. In addition, it is important to judge the risks of such visits to the patients and carers, particularly if they have medical issues. The use and effectiveness of “Near Me” consultations in helping to decide whether vulnerable paediatric patients should have in-person appointments are illustrated and discussed using two families: one where the child is medically compromised (Alagille syndrome with a previous liver transplant), and one where there is a medically compromised parent (undergoing chemotherapy and a bone marrow transplant). In both cases, it was necessary to take into consideration the risks and moral implications of requesting that they attend the dental hospital during a pandemic. The option of remote consultation allowed further clinical information to be evaluated and the families to take part in the decision-making process about whether and when such visits should be scheduled.
These cases demonstrate how medically compromised patients (or patients with vulnerable carers) can have their dental needs assessed in a socially distanced manner by video consultation. Together, the clinician and the patient’s family can weigh the risks, with regard to COVID-19, of attending in-person appointments against the benefit of having treatment. This is particularly important for new paediatric patients who have not yet had a formal assessment. The limitations of this technology are also discussed: it depends on internet availability, the strength of the connection, the video quality, and families owning a device that allows video calls. For those from a lower socio-economic background or living in some rural areas, this may not be possible, which limits its usefulness. For the two patients discussed in this project, where the urgency of their dental condition was unclear, video consultation proved beneficial in deciding an appropriate outcome and preventing unnecessary exposure of vulnerable people to a hospital environment during a pandemic, demonstrating the usefulness of such technology when used appropriately.
Keywords: COVID-19, paediatrics, triage, video consultations
Procedia PDF Downloads 98
893 Modeling Search-And-Rescue Operations by Autonomous Mobile Robots at Sea
Authors: B. Kriheli, E. Levner, T. C. E. Cheng, C. T. Ng
Abstract:
During the last decades, research interest in planning, scheduling, and control of emergency response operations, especially the rescue and evacuation of people from the danger zone of marine accidents, has increased dramatically. Until the survivors (called ‘targets’) are found and saved, loss or damage may occur whose extent depends on the location of the targets and the search duration. The problem is to search for and detect/rescue the targets as efficiently and quickly as possible with the help of intelligent mobile robots, so as to maximize the number of saved people and/or minimize the search cost under restrictions on the number of people to be saved within the allowable response time. We consider the special situation in which the autonomous mobile robots (AMR), e.g., unmanned aerial vehicles and remote-controlled robo-ships, have no operator on board, being guided and completely controlled by on-board sensors and computer programs. We construct a mathematical model for the search process in an uncertain environment and provide a new fast algorithm for scheduling the activities of the autonomous robots during search-and-rescue missions after an accident at sea. We presume that in unknown environments, the AMR’s search-and-rescue activity is subject to two types of error: (i) a 'false-negative' detection error, in which a target object is not discovered (‘overlooked') by the AMR’s sensors even though the AMR is in the close vicinity of the target, and (ii) a 'false-positive' detection error, also known as a ‘false alarm’, in which a clean place or area is wrongly classified by the AMR’s sensors as a correct target. As the general resource-constrained discrete search problem is NP-hard, we restrict our study to finding locally optimal strategies.
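The false-negative ('overlook') error translates into simple Bayesian bookkeeping over candidate locations; the following greedy sketch uses hypothetical priors, overlook probabilities, and search costs (false alarms are omitted for brevity):

```python
# Hypothetical candidate search locations: prior probability that the target
# is there, probability the sensor overlooks it (false negative), and search
# time. A greedy step picks the location with the best detection payoff per
# unit cost, then renormalizes beliefs after an unsuccessful look.
cells = {
    "A": {"p": 0.40, "overlook": 0.30, "cost": 2.0},
    "B": {"p": 0.35, "overlook": 0.10, "cost": 3.0},
    "C": {"p": 0.25, "overlook": 0.20, "cost": 1.0},
}

def effectiveness(c):
    # probability of actually detecting the target here, per unit search time
    return c["p"] * (1.0 - c["overlook"]) / c["cost"]

def greedy_step(cells):
    """Search the best cell; on failure, update beliefs by Bayes' rule:
    an overlooked target stays possible, so p shrinks but does not vanish."""
    best = max(cells, key=lambda k: effectiveness(cells[k]))
    c = cells[best]
    miss = 1.0 - c["p"] * (1.0 - c["overlook"])  # P(search here finds nothing)
    c["p"] = c["p"] * c["overlook"] / miss       # posterior if nothing found
    for k in cells:
        if k != best:
            cells[k]["p"] = cells[k]["p"] / miss
    return best

route = [greedy_step(cells) for _ in range(3)]
print("search order:", route)
```

After each unsuccessful look the searched cell's probability shrinks rather than vanishing, so the route can revisit locations; this is the history-dependence that distinguishes the model from memoryless search.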
A specificity of the considered operations research problem, in comparison with the traditional Kadane-De Groot-Stone search models, is that in our model the probability of a successful search outcome depends not only on the cost/time/probability parameters assigned to each individual location but also on parameters characterizing the entire history of (unsuccessful) search before any next location is selected. We provide a fast approximation algorithm for finding the AMR route adopting a greedy search strategy in which, at each step, the on-board computer computes a current search effectiveness value for each location in the zone and sequentially searches the location with the highest search effectiveness value. Extensive experiments with random and real-life data provide strong evidence in favor of the suggested operations research model and the corresponding algorithm.
Keywords: disaster management, intelligent robots, scheduling algorithm, search-and-rescue at sea
Procedia PDF Downloads 172
892 Degradation Kinetics of Cardiovascular Implants Employing Full Blood and Extra-Corporeal Circulation Principles: Mimicking the Human Circulation In vitro
Authors: Sara R. Knigge, Sugat R. Tuladhar, Hans-Klaus Höffler, Tobias Schilling, Tim Kaufeld, Axel Haverich
Abstract:
Tissue-engineered (TE) heart valves based on degradable electrospun fiber scaffolds represent a promising approach to overcoming the known limitations of mechanical or biological prostheses. However, the mechanical stress in the high-pressure system of the human circulation is a severe challenge for these delicate materials. Hence, the prediction of the scaffolds' in vivo degradation kinetics must be as accurate as possible to prevent fatal events in future animal or even clinical trials. This study therefore investigates whether long-term testing in full blood provides more meaningful results regarding degradation behavior than conventional tests in simulated body fluid (SBF) or phosphate-buffered saline (PBS). Fiber mats were produced from a polycaprolactone (PCL)/tetrafluoroethylene solution by electrospinning. The morphology of the fiber mats was characterized via scanning electron microscopy (SEM). A maximally physiological degradation environment was established using a test set-up with porcine full blood. The set-up consists of a reaction vessel, an oxygenator unit, and a roller pump. The blood parameters (pO2, pCO2, temperature, and pH) were monitored with an online test system. All tests were also carried out in the test circuit with SBF and PBS to compare conventional degradation media with the novel full-blood setting. The polymer's degradation was quantified by SEM image analysis, differential scanning calorimetry (DSC), and Raman spectroscopy. Tensile and cyclic loading tests were performed to evaluate the mechanical integrity of the scaffold. Preliminary results indicate that PCL degraded more slowly in full blood than in SBF and PBS. The uptake of water was more pronounced in the full-blood group. PCL also preserved its mechanical integrity longer when degraded in full blood. Protein adsorption increased during the degradation process, and red blood cells, platelets, and their aggregates adhered to the PCL.
Presumably, the degradation led to a more hydrophilic polymeric surface which promoted the protein adsorption and the blood cell adhesion. Testing degradable implants in full blood allows for developing more reliable scaffold materials in the future. Material tests in small and large animal trials thereby can be focused on testing candidates that have proven to function well in an in-vivo-like setting.
Keywords: electrospun scaffold, full blood degradation test, long-term polymer degradation, tissue engineered aortic heart valve
Procedia PDF Downloads 150
891 Challenges to Safe and Effective Prescription Writing in the Environment Where Digital Prescribing is Absent
Authors: Prashant Neupane, Asmi Pandey, Mumna Ehsan, Katie Davies, Richard Lowsby
Abstract:
Introduction/Background & aims: Safe and effective prescribing in hospitals directly and indirectly impacts the health of patients. Even though digital prescribing in the National Health Service (NHS), UK, has been adopted in many tertiary centers as well as district general hospitals, a significant number of NHS trusts still use paper prescribing. We came across many irregularities in our daily clinical practice with paper prescribing. The main aim of the study was to assess how safely and effectively we are prescribing at our hospital, where there is no access to digital prescribing. Method/Summary of work: We conducted a prospective audit in the critical care department at Mid Cheshire Hospitals NHS Foundation Trust in which 20 prescription charts from different patients were randomly selected over a period of 1 month. We assessed 16 categories in each prescription chart and compared them to the standard trust guidelines on prescription. Results/Discussion: We collected data from 20 different prescription charts, evaluating 16 categories within each. The results showed an urgent need for improvement in 8 different areas. In 85% of the prescription charts, not all prescribers who prescribed medications were identified: name, GMC number, and signature were absent from the required prescriber identification section. In 70% of charts, either the indication or the review date of antimicrobials was absent. Units of medication were not documented correctly in 65%, and the allergy status of the patient was absent in 30% of the charts. The start date of medications was missing and alterations of medications were not done properly in 35% of charts. The patient's name was not recorded in all required sections of the chart in 50% of cases, and cancellations of medications were not done properly in 45% of the prescription charts.
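The percentages in the results section are non-compliance tallies over the sampled charts; a minimal sketch with illustrative (not actual) audit records for three of the sixteen categories:

```python
# Illustrative audit records: for each chart, whether the audited category
# was completed correctly. The real audit covered 20 charts and 16
# categories from the trust's prescription standard.
charts = [
    {"prescriber identified": False, "allergy status": True,  "units correct": True},
    {"prescriber identified": False, "allergy status": True,  "units correct": False},
    {"prescriber identified": True,  "allergy status": False, "units correct": True},
    {"prescriber identified": False, "allergy status": True,  "units correct": False},
]

def noncompliance(charts):
    """Percent of charts failing each audited category."""
    n = len(charts)
    return {cat: 100.0 * sum(not chart[cat] for chart in charts) / n
            for cat in charts[0]}

for cat, pct in noncompliance(charts).items():
    print(f"{cat:22s} non-compliant in {pct:.0f}% of charts")
```

Re-running the same tally after an intervention (such as the awareness posters described below) gives a direct before/after comparison for each category.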
Conclusion(s): From the audit and data analysis, we identified the areas in which prescription writing in the critical care department needed improvement. During meetings and conversations with experts from the pharmacy department, however, we realized that this audit represents only a specialized department of the hospital, where prescribing is limited to a certain number of prescribers. In larger departments of the hospital, where patient turnover is much higher, the results could be considerably worse. The findings were discussed in the critical care MDT meeting, where digital/electronic prescribing was proposed. A poster and presentation on safe and effective prescribing were delivered, and an awareness poster was attached beside every bed in critical care, where it is visible to prescribers. We consider this a temporary measure to improve the quality of prescribing; however, we strongly believe that digital prescribing will go much further in controlling the weak areas seen in paper prescribing.
Keywords: safe prescribing, NHS, digital prescribing, prescription chart
Procedia PDF Downloads 120
890 Gender-Transformative Education: A Pathway to Nourishing and Evolving Gender Equality in the Higher Education of Iran
Authors: Sepideh Mirzaee
Abstract:
Gender-transformative education (G-TE) is a challenging concept in the field of education and a matter of intense debate in the contemporary world. Paulo Freire, the prominent advocate of transformative education, considered it an alternative to the conventional banking model of education. A more inclusive concept has since been introduced, namely G-TE, an unbiased education that fosters an environment of gender justice. As its main tenet, G-TE eliminates obstacles to education and promotes social change. A plethora of contemporary research indicates that G-TE could fundamentally transform education systems by displacing inequalities and changing gender stereotypes. Despite significant progress in female education and its effects on gender equality in Iran, challenges persist, and gender disparities remain in society and in education specifically. For example, the number of women with university degrees is rising, and with it their demand for employment; yet many job opportunities remain held by men, and society still finds it unacceptable to assign such occupations to women. In fact, Iran is regarded as a patriarchal society in which educational contexts can play a critical role in transmitting gender ideology to learners; gender ideologies in education can thus become the prevailing ideologies of the entire society. Improving education in this regard can therefore lead to significant social change, subsequently influencing the status of women not only within their own country but also on a global scale. Notably, higher education plays a vital role in this empowerment and social change: it can impart gender-neutral ideologies to learners, bring about substantial change, and alleviate the detrimental effects of gender inequality. 
Therefore, this study aims to conceptualize, within a theoretical framework, the pivotal role of G-TE and its potential for developing gender equality in the Iranian higher education system. The study emphasizes the necessity of establishing a theoretical grounding for citizenship and transformative education while distinguishing gender-related issues including gender equality, equity and parity. This theoretical foundation will shed light on the decisions made by policy-makers, syllabus designers, material developers, and especially professors and students. By doing so, they will be able to promote and implement gender equality, recognizing the determinants, obstacles, and consequences of sustaining gender-transformative approaches in their classes within the Iranian higher education system. The expected outcomes include the eradication of gender inequality, the transformation of gender stereotypes, and the provision of equal opportunities for both males and females in education.
Keywords: citizenship education, gender inequality, higher education, patriarchal society, transformative education
Procedia PDF Downloads 65
889 Audio-Visual Co-Data Processing Pipeline
Authors: Rita Chattopadhyay, Vivek Anand Thoutam
Abstract:
Speech is the most natural means of communication, allowing us to exchange feelings and thoughts quickly. Quite often, people who can communicate orally cannot interact with or operate computers or devices. It is easier and quicker to give speech commands than to type them, and likewise easier to listen to audio played from a device than to read output from it. With robotics emerging as a market with applications in warehouses, the hospitality industry, consumer electronics, assistive technology, etc., speech-based human-machine interaction is becoming a lucrative feature for robot manufacturers. With this in mind, the objective of this paper is to design an “Audio-Visual Co-Data Processing Pipeline.” This pipeline integrates automatic speech recognition, a natural language model for text understanding, object detection, and text-to-speech modules. Many deep learning models exist for each of these modules, but OpenVINO Model Zoo models are used because the OpenVINO toolkit covers both computer vision and non-computer-vision workloads across Intel hardware, maximizes performance, and accelerates application development. A speech command is given as input containing the target objects to be detected and the start and end times of the interval to extract from the video. Speech is converted to text using the QuartzNet automatic speech recognition model. A summary is extracted from the text using the Generative Pre-Trained Transformer-3 (GPT-3) natural language model. Based on the summary, the relevant frames are extracted from the video, and the You Only Look Once (YOLO) object detection model detects objects in these frames. The frame numbers containing target objects (the objects specified in the speech command) are saved as text. 
Finally, this text (the frame numbers) is converted to speech using a text-to-speech model and played from the device. The project is developed for the 80 YOLO labels, and the user can extract frames based on one or two target labels; the pipeline can easily be extended to more target labels by making appropriate changes in the object detection module. The project supports four different speech command formats, implemented by including sample examples in the prompt used by the GPT-3 model; based on user preference, a new speech command format can be added by including examples of that format in the prompt. This pipeline can be used in many projects, such as human-machine interfaces, human-robot interaction, and surveillance through speech commands. Any object detection project can be upgraded with this pipeline so that one can give speech commands and have the output played from the device.
Keywords: OpenVINO, automatic speech recognition, natural language processing, object detection, text to speech
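The stages described above can be sketched end to end. This is a minimal sketch of the data flow only: the real pipeline uses OpenVINO models (QuartzNet for ASR, GPT-3 for summarisation, YOLO for detection) plus a TTS model, while here every stage is a hypothetical stub so the hand-offs between stages can be shown without model weights.

```python
# Minimal sketch of the audio-visual co-data processing pipeline.
# Every stage below is a stand-in stub (assumption), not the real model.

def speech_to_text(audio):
    # Stand-in for the QuartzNet ASR model: assume the transcript is given.
    return audio["transcript"]

def summarize_command(text):
    # Stand-in for GPT-3 prompt-based summarisation: pull out target
    # labels with naive keyword matching against a tiny label set.
    words = text.lower().split()
    labels = [w for w in words if w in {"person", "car", "dog"}]
    return {"labels": labels, "start": 0.0, "end": 10.0}  # assumed interval

def detect_objects(frame):
    # Stand-in for YOLO: each frame is a dict mapping label -> present.
    return {label for label, present in frame.items() if present}

def run_pipeline(audio, video_frames):
    command = summarize_command(speech_to_text(audio))
    targets = set(command["labels"])
    # Keep the frame numbers whose detections include a target label.
    hits = [i for i, frame in enumerate(video_frames)
            if targets & detect_objects(frame)]
    # Stand-in for text-to-speech: return the text that would be spoken.
    return f"Target objects found in frames: {hits}"

audio = {"transcript": "find the car between zero and ten seconds"}
frames = [{"car": False}, {"car": True, "person": False}, {"car": True}]
print(run_pipeline(audio, frames))  # → Target objects found in frames: [1, 2]
```

Swapping any stub for the corresponding OpenVINO inference call would preserve the same interfaces between stages.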
Procedia PDF Downloads 80
888 Detailed Ichnofacies and Sedimentological Analysis of the Cambrian Succession (Tal Group) of the Nigalidhar Syncline, Lesser Himalaya, India and the Interpretation of Its Palaeoenvironment
Authors: C. A. Sharma, Birendra P. Singh
Abstract:
Ichnofacies analysis is considered the best paleontological tool for interpreting ancient depositional environments. Nineteen (19) ichnogenera (namely Bergaueria, Catenichnus, Cochlichnus, Cruziana, Diplichnites, Dimorphichnus, Diplocraterion, Gordia, Guanshanichnus, Lockeia, Merostomichnites, Monomorphichnus, Palaeophycus, Phycodes, Planolites, Psammichnites, Rusophycus, Skolithos and Treptichnus) are recovered from the Tal Group (Cambrian) of the Nigalidhar Syncline. The stratigraphic occurrences of these ichnogenera represent alternating proximal Cruziana and Skolithos ichnofacies along the contact of the Sankholi and Koti-Dhaman formations of the Tal Group. Five ichnogenera, namely Catenichnus, Guanshanichnus, Lockeia, Merostomichnites and Psammichnites, are recorded for the first time from the Nigalidhar Syncline. The Cruziana ichnofacies is found from the upper part of the Sankholi Formation to the lower part of the Koti-Dhaman Formation in the Nigalidhar Syncline. The preservational characters here indicate a subtidal environment with a poorly sorted, unconsolidated substrate. They also indicate depositional conditions of moderate to high energy below the fair-weather wave base but above the storm wave base, in a nearshore to foreshore setting within a wave-dominated shallow-water environment. The proximal Cruziana ichnofacies is interrupted by the Skolithos ichnofacies in the Tal Group of the Nigalidhar Syncline, indicating fluctuating high-energy conditions unfavourable for the opportunistic organisms that dominated during the proximal Cruziana ichnofacies. The excursion of the Skolithos ichnofacies (as a pipe rock in the upper part of the Sankholi Formation) into the proximal Cruziana ichnofacies in the Tal Group indicates that the increased energy and allied parameters are attributable to a high rate of sedimentation near the proximal part of the basin. 
The level bearing the Skolithos ichnofacies in the Nigalidhar Syncline, at the junction of the Sankholi and Koti-Dhaman formations, can be correlated with the level marked as an unconformity between the Deo-Ka-Tibba and Dhaulagiri formations by the conglomeratic horizon in the Mussoorie Syncline, Lesser Himalaya, India. Thus, at this stratigraphic level the Tal Group of the Nigalidhar Syncline represents slightly deeper-water conditions than the Mussoorie Syncline, where subaerial exposure dominated, leading to deposition of the conglomeratic horizon and subsequent formation of the unconformity. The overall ichnological and sedimentological dataset allows us to infer that the Cambrian successions of the Nigalidhar Syncline were deposited in the wave-dominated proximal part of the basin, under foreshore to upper-shoreface regimes of a shallow marine setting.
Keywords: Cambrian, Ichnofacies, Lesser Himalaya, Nigalidhar, Tal Group
Procedia PDF Downloads 258
887 Medicinal Plants: An Antiviral Depository with Complex Mode of Action
Authors: Daniel Todorov, Anton Hinkov, Petya Angelova, Kalina Shishkova, Venelin Tsvetkov, Stoyan Shishkov
Abstract:
Human herpes viruses (HHV) are ubiquitous pathogens with a pandemic spread across the globe. HHV type 1 is the main causative agent of cold sores and fever blisters around the mouth and on the face, whereas HHV type 2 is generally responsible for genital herpes outbreaks. Treatment of both viruses is more or less successful with antivirals from the nucleoside analogue group, but their wide application increasingly leads to the emergence of resistant mutants. Medicinal plants have long been used to treat a number of infectious and non-infectious diseases. Their diversity and their ability to produce a vast variety of secondary metabolites according to the characteristics of their environment give them the potential to help in the fight against viral infections. Their variable chemical characteristics and complex composition are an advantage in the treatment of herpes, since they significantly complicate the emergence of resistant mutants. The screening process is difficult due to the lack of standardization, which is why it is especially important to follow the mechanism of a plant's antiviral action: interactions among its compounds may result in enhanced antiviral effects, and the most appropriate environmental conditions can be chosen to maximize the amount of active secondary metabolites. During our study, we screened a variety of plant extracts for antiviral activity against both viral replication and the extracellular virion itself. We obtained our results following the logical sequence of the experimental settings: determining the cytotoxicity of the extracts, evaluating the overall effect on viral replication, and assessing the effect on the extracellular virion. 
We investigated the effect of the extracts on the individual stages of the viral replication cycle: viral adsorption, penetration, and replication depending on the time of addition. Where the initial results were positive, we examined these stages in more detail. Our results indicate that some of the extracts from Lamium album have several targets, with the first stages of the viral life cycle most affected. Several of our active antiviral agents have shown an effect on the extracellular virion and on the adsorption and penetration processes. Our research over the last decade has identified several promising antiviral plants, some of which belong to the family Lamiaceae. The rich set of active ingredients of the plants in this family makes them a good source of antiviral preparations.
Keywords: human herpes virus, antiviral activity, Lamium album, Nepeta nuda
Procedia PDF Downloads 154
886 Applying Quadrant Analysis in Identifying Business-to-Business Customer-Driven Improvement Opportunities in Third Party Logistics Industry
Authors: Luay Jum'a
Abstract:
Many challenges face third-party logistics (3PL) providers in domestic and global markets, creating a volatile decision-making environment. Challenges such as managing changes in consumer behaviour, demanding customer expectations, and time compression have become complex problems for 3PL providers. Since the movement towards increased outsourcing outpaces the movement towards insourcing, the need to achieve a competitive advantage over competitors in the 3PL market increases. This trend continues to grow, and as a result, the areas of strength and improvement highlighted through analysis of the logistics service quality (LSQ) factors that drive B2B customer satisfaction have become a priority for 3PL companies. Consequently, 3PL companies increasingly focus on the issues most important from their customers' perspective and rely on this information in their managerial decisions. This study is therefore concerned with providing guidance for improving LSQ levels in the 3PL industry in Jordan. The study focused on the most important LSQ factors and used a managerial tool that guides 3PL companies in making LSQ improvements based on a quadrant analysis of two main dimensions: declared LSQ importance and inferred LSQ importance. Although a considerable amount of research has investigated the relationship between LSQ and customer satisfaction, there remains a lack of managerial tools to aid LSQ improvement decision-making. Moreover, the main reasons companies use 3PL service providers are the realised reduction in the total cost of logistics operations and the incremental improvement in customer service. 
In this regard, a managerial tool that helps 3PL service providers manage the portfolio of LSQ factors effectively and efficiently would be a worthwhile investment. One way of suggesting LSQ improvement actions for 3PL service providers is the adoption of analysis tools that perform attribute categorisation, such as the importance-performance matrix. Quadrant analysis thus provides a valuable opportunity for 3PL service providers to identify improvement opportunities, as the importance of customer service attributes is identified by two different techniques that complement each other. Data were collected through a survey, with 293 questionnaires returned from business-to-business (B2B) customers of 3PL companies in Jordan. The results showed that the LSQ factors vary in their importance and that 3PL companies should focus on some factors more than others. The ordering procedures and timeliness/responsiveness factors were found to be crucial in 3PL businesses and therefore need more focus and development by 3PL service providers in the Jordanian market.
Keywords: logistics service quality, managerial decisions, quadrant analysis, third party logistics service provider
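The quadrant categorisation described above can be sketched as follows. Each LSQ factor is placed by declared importance (stated by customers) against inferred importance (derived statistically, e.g. from a regression on satisfaction); the factor names, scores, cut-offs, and quadrant labels below are illustrative assumptions, not the study's actual data or terminology.

```python
# Hedged sketch of a two-dimensional quadrant analysis of LSQ factors.
# Scores, cut-offs, and quadrant labels are illustrative assumptions.

def quadrant(declared, inferred, d_cut, i_cut):
    """Assign a factor to one of four quadrants by its two importance scores."""
    if declared >= d_cut and inferred >= i_cut:
        return "critical: maintain and develop"
    if declared >= d_cut:
        return "declared only: communicate value"
    if inferred >= i_cut:
        return "hidden driver: invest quietly"
    return "low priority"

# Hypothetical (declared, inferred) importance scores on a 1-5 scale.
factors = {
    "ordering procedures":        (4.6, 4.4),
    "timeliness/responsiveness":  (4.5, 4.7),
    "personnel contact quality":  (3.2, 4.1),
    "order discrepancy handling": (3.0, 2.8),
}
d_cut = i_cut = 4.0  # assumed cut-offs, e.g. the scale midpoint or sample mean

for name, (d, i) in factors.items():
    print(f"{name}: {quadrant(d, i, d_cut, i_cut)}")
```

Cut-offs are a design choice: grand means, scale midpoints, or medians each shift which factors land in the "critical" quadrant, so they should be reported alongside the matrix.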
Procedia PDF Downloads 127
885 Understanding the Common Antibiotic and Heavy Metal Resistant-Bacterial Load in the Textile Industrial Effluents
Authors: Afroza Parvin, Md. Mahmudul Hasan, Md. Rokunozzaman, Papon Debnath
Abstract:
The effluents of textile industries contain considerable amounts of heavy metals, imposing potential metal loads on microbes if discharged into the environment without treatment. Aim: In this study, both lactose- and non-lactose-fermenting bacterial isolates were isolated from textile industrial effluents of Savar, a region of Bangladesh, to compare and understand the heavy metal load in these microorganisms and to determine the effect of heavy metal resistance on antibiotic resistance. Methods: Five textile industrial canals of Savar were selected, and effluent samples were collected between June and August 2016. The total bacterial colony (TBC) count was recorded from day 1 to day 5 for sample dilutions of 10⁻⁶ to 10⁻¹⁰. Isolates were selected using 4 differential media and tested for the minimum inhibitory concentration (MIC) of heavy metals by the plate assay method and for antibiotic susceptibility by the modified Kirby-Bauer disc diffusion method. To detect the combined effect of heavy metals and antibiotics, a binary exposure experiment was performed, and for plasmid profiling, plasmid DNA was extracted from selected isolates by the alkaline lysis method. Results: In most cases, the colony-forming units (CFU) per plate for a 50 µl diluted sample were uncountable at the 10⁻⁶ dilution but countable at the 10⁻¹⁰ dilution, and counts did not vary much from canal to canal. A total of 50 Shigella-like, 50 Salmonella-like, and 100 Escherichia coli (E. coli)-like isolates were selected for this study. The MIC for nickel (Ni) was less than or equal to 0.6 mM for 100% of Shigella- and Salmonella-like isolates, but for only 3% of E. coli-like isolates. The MIC for chromium (Cr) was less than or equal to 2.0 mM for 16% of Shigella-like, 20% of Salmonella-like, and 17% of E. coli-like isolates. 
Around 60% of both Shigella- and Salmonella-like isolates, but only 20% of E. coli-like isolates, had a MIC of less than or equal to 1.2 mM for lead (Pb). The most prevalent resistance among Shigella- and Salmonella-like isolates was to azithromycin (AZM), at 38% and 48% respectively; for E. coli-like isolates, resistance was highest (36%) for sulfamethoxazole-trimethoprim (SXT). In the binary exposure experiment, the antibiotic zone of inhibition mostly increased in the presence of heavy metals for all types of isolates. The largest plasmids found were 21 kb and 14 kb for lactose- and non-lactose-fermenting isolates, respectively. Conclusion: Microbial resistance to antibiotics and metal ions poses potential health hazards because these traits are generally associated with transmissible plasmids. Microorganisms resistant to antibiotics and tolerant to metals appear as a result of exposure to metal-contaminated environments.
Keywords: antibiotics, effluents, heavy metals, minimum inhibitory concentration, resistance
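The plate-count arithmetic behind the CFU figures above follows the standard serial-dilution formula: CFU/ml = colonies / (plated volume in ml × dilution factor). A minimal sketch, using an illustrative colony count rather than the study's data:

```python
# Back-of-envelope CFU/ml calculation for the plating scheme described
# above: 50 µl of a serial dilution is plated and colonies are counted.
# The colony count (42) is illustrative, not taken from the study.

def cfu_per_ml(colonies, plated_volume_ul, dilution_factor):
    """CFU/ml = colonies / (plated volume in ml * dilution factor)."""
    plated_volume_ml = plated_volume_ul / 1000.0
    return colonies / (plated_volume_ml * dilution_factor)

# e.g. 42 colonies on a 50 µl plate at a 10^-10 dilution
print(f"{cfu_per_ml(42, 50, 1e-10):.2e} CFU/ml")  # → 8.40e+12 CFU/ml
```

This also shows why the 10⁻⁶ plates were uncountable: at effluent loads of this order, a million-fold dilution still leaves tens of thousands of colonies per 50 µl plate.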
Procedia PDF Downloads 315
884 3D Classification Optimization of Low-Density Airborne Light Detection and Ranging Point Cloud by Parameters Selection
Authors: Baha Eddine Aissou, Aichouche Belhadj Aissa
Abstract:
Light detection and ranging (LiDAR) is an active remote sensing technology used in several applications, and airborne LiDAR is becoming an important technology for acquiring highly accurate, dense point clouds. Classifying an airborne laser scanning (ALS) point cloud is an important task that remains a real challenge for many scientists. The support vector machine (SVM) is one of the most used kernel-based statistical learning algorithms. The SVM is a non-parametric method, recommended when the data distribution cannot be well modeled by a standard parametric probability density function. Using a kernel, it performs robust non-linear classification of samples. Data are rarely linearly separable; SVMs map them into a higher-dimensional space where they become linearly separable, while the kernel allows all computations to be performed in the original space. This is one of the main reasons SVMs are well suited for high-dimensional classification problems. Only a few training samples, called support vectors, are required. The SVM has also shown its potential to cope with uncertainty in data caused by noise and fluctuation, and it is computationally efficient compared with several other methods. Such properties are particularly suited to remote sensing classification problems and explain its recent adoption. In this poster, SVM classification of ALS LiDAR data is proposed. First, connected component analysis is applied to cluster the point cloud. Second, the resulting clusters are fed to the SVM classifier. The radial basis function (RBF) kernel is used because of the small number of parameters (C and γ) to be chosen, which decreases computation time. To optimize the classification rates, parameter selection is explored: it consists of finding the parameters (C and γ) that yield the best overall accuracy, using grid search and 5-fold cross-validation. 
The LiDAR point cloud used is provided by the German Society for Photogrammetry, Remote Sensing, and Geoinformation. The ALS data are characterized by a low density (4-6 points/m²) and cover an urban area located in residential parts of the city of Vaihingen in southern Germany. The ground class and three classes of roof superstructures are considered, i.e., a total of 4 classes. The training and test sets were selected randomly several times. The results demonstrate that parameter selection can narrow the search to a restricted interval of (C and γ) that can be explored further, but it does not systematically lead to the optimal rates. The SVM classifier with tuned hyper-parameters is compared with the classifiers most used in the literature for LiDAR data: random forest, AdaBoost, and decision tree. The comparison showed the superiority of the SVM classifier with parameter selection for LiDAR data over the other classifiers.
Keywords: classification, airborne LiDAR, parameters selection, support vector machine
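The (C, γ) selection procedure described above is the canonical grid search with k-fold cross-validation, and can be sketched with scikit-learn's `GridSearchCV`. The synthetic 4-class dataset below stands in for the per-cluster ALS features (which are not published with the abstract), and the grid values are illustrative:

```python
# Sketch of RBF-SVM parameter selection by grid search + 5-fold CV,
# as described above. Synthetic features stand in for the ALS clusters;
# the (C, gamma) grid values are illustrative assumptions.

from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Synthetic 4-class dataset mimicking the ground + 3 roof-superstructure classes.
X, y = make_classification(n_samples=400, n_features=6, n_informative=4,
                           n_classes=4, random_state=0)

param_grid = {"C": [0.1, 1, 10, 100], "gamma": [0.001, 0.01, 0.1, 1]}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5,
                      scoring="accuracy")
search.fit(X, y)

print("best parameters:", search.best_params_)
print(f"best 5-fold CV overall accuracy: {search.best_score_:.3f}")
```

In practice the grid is often refined once around the best cell, consistent with the observation above that the search narrows to a restricted interval of (C, γ) without guaranteeing the global optimum.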
Procedia PDF Downloads 147
883 The International Fight against the Financing of Terrorism: Analysis of the Anti-Money Laundering and Combating Financing of Terrorism Regime
Authors: Loukou Amoin Marie Djedri
Abstract:
Financing is important for all terrorists, from the largest organizations in control of territories to the smallest groups, not only for spreading fear through attacks but also for financing the expansion of terrorist dogmas. These organizations pose serious threats to the international community. The disruption of terrorist financing aims to create a hostile environment for the growth of terrorism and to limit terrorist groups' capacities considerably. The World Bank (WB), together with the International Monetary Fund (IMF), decided to include in their scope the fight against money laundering and the financing of terrorism, in order to assist member states in protecting their internal financial systems from terrorist use and abuse and in reinforcing their legal systems. To do so, they adopted the Anti-Money Laundering/Combating the Financing of Terrorism (AML/CFT) standards set up by the Financial Action Task Force. This set of standards, recognized as the international standard for anti-money laundering and combating the financing of terrorism, has to be implemented by member states in order to strengthen their judicial systems and relevant national institutions. However, we noted that, to date, some member states still have significant AML/CFT deficiencies, which can constitute serious threats not only to a country's economic stability but also to the global financial system. In addition, studies have stressed that countries implement repressive measures more than preventive measures, which can be an important weakness in a state's security system. Furthermore, we noticed that the AML/CFT standards evolve slowly, while the techniques used by terrorist networks keep developing. The goal of the study is to show how to enhance global AML/CFT compliance through the work of the IMF and the WB, to help member states consolidate their financial systems. 
To encourage and ensure the effectiveness of these standards, a methodology for assessing compliance with the AML/CFT standards has been created to follow up their concrete implementation and to provide accurate technical assistance to countries in need. A risk-based approach has also been adopted as a key component of the implementation of the AML/CFT standards, with the aim of strengthening their efficiency. We noted, however, that the assessment is not efficient in enhancing AML/CFT measures, because it seems to lack adaptation to each country's situation; in other words, internal and external factors are not sufficiently taken into account in a country assessment program. The purpose of this paper is to analyze the AML/CFT regime in the fight against the financing of terrorism and to find lasting solutions for achieving global AML/CFT compliance. The work of all the organizations involved in this combat is imperative to protect the financial network and to lead to the disintegration of terrorist groups in the future.
Keywords: AML/CFT standards, financing of terrorism, international financial institutions, risk-based approach
Procedia PDF Downloads 275
882 Human Interaction Skills and Employability in Courses with Internships: Report of a Decade of Success in Information Technology
Authors: Filomena Lopes, Miguel Magalhaes, Carla Santos Pereira, Natercia Durao, Cristina Costa-Lobo
Abstract:
Implementing curricular internships with undergraduate students is a pedagogical option with good results as perceived by academic staff, employers, and graduates in general, and in Information Technology (IT) in particular. This type of exercise has never been more relevant, as one tries to give meaning to the future in a landscape of rapid and deep change: the potentially disruptive impact on jobs of advances in robotics, artificial intelligence, and 3-D printing is a focus of fierce debate. It is in this context that more and more students and employers engage in the pursuit of responses that promote careers and business development, making their investment decisions about training and hiring. Three decades of experience and research in the computer science degree and the information systems technologies degree at Portucalense University, a private Portuguese university, have provided strong evidence of the advantages of this option. The development of Human Interaction Skills, as well as the attractiveness of such experiences for students, are topics assumed as core in the conception and management of the activities implemented in these study cycles. The objective of this paper is to gather evidence of the Human Interaction Skills explained and valued within the curricular internship experiences and their contribution to IT students' employability. Data collection was based on a questionnaire administered to internship counselors and to students who completed internships in these undergraduate courses in the last decade. 
The trainee supervisor, responsible for monitoring the performance of IT students during traineeship activities, evaluates the following Human Interaction Skills: motivation and interest in the activities developed, interpersonal relationships, cooperation in company activities, assiduity, ease of knowledge apprehension, compliance with norms, insertion in the work environment, productivity, initiative, ability to take responsibility, creativity in proposing solutions, and self-confidence. The results show that these undergraduate courses promote the development of Human Interaction Skills and that these students, once they finish their degrees, are able to take up paid positions, mainly by invitation of the institutions in which they carried out their curricular internships. The findings of the present study contribute to widening the analysis of the effectiveness of internships, in terms of future research and action on the transition from Higher Education pathways to the labour market.
Keywords: human interaction skills, employability, internships, information technology, higher education
Procedia PDF Downloads 289