Search results for: Spin Generated Forces
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4713

213 Accelerating Personalization Using Digital Tools to Drive Circular Fashion

Authors: Shamini Dhana, G. Subrahmanya VRK Rao

Abstract:

The fashion industry is advancing towards a mindset of zero waste, personalization, creativity, and circularity. The trend of upcycling clothing and materials into personalized fashion is being demanded by the next generation, and there is a need for a digital tool to accelerate the process towards mass customization. Dhana’s D/Sphere fashion technology platform uses digital tools to accelerate upcycling: advanced fashion garments can be designed and developed via reuse, repurposing, and recreating activities, using existing fabric and circulating materials. The D/Sphere platform has the following objectives: to provide (1) an opportunity to develop modern fashion using existing, finished materials and clothing without chemicals or water consumption; (2) the potential for everyday customers and designers to use the medium of fashion for creative expression; (3) a solution to address the global textile waste generated by pre- and post-consumer fashion; (4) a solution to reduce carbon emissions and water and energy consumption with the participation of all stakeholders; (5) an opportunity for brands, manufacturers, and retailers to work towards zero-waste designs and an alternative revenue stream. Other benefits of this alternative approach include sustainability metrics, trend prediction, facilitation of disassembly and remanufacture, and the use of deep learning and hyper-heuristics for high accuracy. A design tool for mass personalization and customization utilizing existing circulating materials and deadstock, targeted at fashion stakeholders, will lower environmental costs, increase revenues through up-to-date upcycled apparel, produce less textile waste during the cut-sew-stitch process, and provide a real design solution for the end customer to be part of circular fashion. 
The broader impact of this technology will be a different mindset towards circular fashion: increasing the value of the product through multiple life cycles, finding alternatives towards zero waste, and reducing the textile waste that ends up in landfills. This technology platform will be of interest to brands and companies that have a responsibility to reduce their environmental impact and their contribution to climate change as it pertains to the fashion and apparel industry. Today, over 70% of the output of the $3 trillion fashion and apparel industry ends up in landfills. To that end, the industry needs such alternative techniques both to address global textile waste and to provide an opportunity to include all stakeholders and drive circular fashion with new personalized products. This type of modern systems thinking is currently being explored around the world by the private sector, organizations, research institutions, and governments. This technological innovation using digital tools has the potential to revolutionize the way we look at communication, capabilities, and collaborative opportunities amongst stakeholders in the development of new personalized and customized products, as well as its positive impacts on society, our environment, and global climate change.

Keywords: circular fashion, deep learning, digital technology platform, personalization

Procedia PDF Downloads 34
212 Quantification of the Non-Registered Electrical and Electronic Equipment for Domestic Consumption and Enhancing E-Waste Estimation: A Case Study on TVs in Vietnam

Authors: Ha Phuong Tran, Feng Wang, Jo Dewulf, Hai Trung Huynh, Thomas Schaubroeck

Abstract:

The rapid growth in volume and the complex composition of waste electrical and electronic equipment (e-waste) have made it one of the most problematic waste streams worldwide. Precise information on its size at the national, regional, and global levels has therefore been highlighted as a prerequisite for a proper management system. However, this is a very challenging task, especially in developing countries, where both a formal e-waste management system and the statistical data necessary for e-waste estimation, i.e., data on the production, sale, and trade of electrical and electronic equipment (EEE), are often lacking. Moreover, there is an inflow of non-registered EEE, which ‘invisibly’ enters the domestic EEE market and is then used for domestic consumption. The non-registered and, in most cases, illicit nature of this flow makes it difficult or even impossible to capture in any statistical system. The e-waste generated from it is thus often uncounted in current e-waste estimates based on statistical market data. Therefore, this study focuses on enhancing e-waste estimation in developing countries and proposes a calculation pathway to quantify the magnitude of the non-registered EEE inflow. An advanced input-output analysis model (the Sale–Stock–Lifespan model) has been integrated into the calculation procedure. In general, the Sale–Stock–Lifespan model helps improve the quality of the input data for modeling (i.e., it performs data consolidation to create a more accurate lifespan profile, and it models a dynamic lifespan to take into account changes over time), through which the quality of the e-waste estimation can be improved. To demonstrate these objectives, a case study on televisions (TVs) in Vietnam has been employed. The results show that the amount of waste TVs in Vietnam has increased fourfold since 2000. This upward trend is expected to continue: in 2035, a total of 9.51 million TVs are predicted to be discarded. 
Moreover, the estimation of the non-registered TV inflow shows that it contributed, on average, about 15% of the total TVs sold on the Vietnamese market over the period 2002 to 2013. To tackle potential uncertainties associated with the estimation models and input data, sensitivity analysis has been applied. The results show that both the waste and non-registered-inflow estimates depend on two parameters: the number of TVs used per household and the lifespan. In particular, with a 1% increase in the TV in-use rate, the average market share of the non-registered inflow over the period 2002-2013 increases by 0.95%. However, it decreases from 27% to 15% when the constant, unadjusted lifespan is replaced by the dynamic, adjusted lifespan. The effect of these two parameters on the amount of waste TVs generated each year is more complex and non-linear over time. To conclude, despite the remaining uncertainty, this study is the first attempt to apply the Sale–Stock–Lifespan model to improve e-waste estimation in developing countries and to quantify the non-registered EEE inflow to domestic consumption. It can be further improved in the future with more knowledge and data.
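
The cohort-based logic behind a sale-stock-lifespan estimate can be sketched as follows; the Weibull-shaped lifespan profile and all numbers here are illustrative assumptions, not the study's calibrated Vietnamese TV data:

```python
import math

def lifespan_pmf(k, lam, max_age=30):
    """Discrete lifespan profile: probability that a unit is discarded
    at age t (Weibull-shaped; parameters are illustrative assumptions)."""
    cdf = lambda t: 1.0 - math.exp(-((t / lam) ** k))
    return [cdf(t + 1) - cdf(t) for t in range(max_age)]

def waste_generated(sales_by_year, pmf, target_year, first_year):
    """Waste arising in target_year: sum over past sales cohorts of the
    fraction of each cohort reaching end-of-life that year."""
    total = 0.0
    for i, sold in enumerate(sales_by_year):
        age = target_year - (first_year + i)
        if 0 <= age < len(pmf):
            total += sold * pmf[age]
    return total

pmf = lifespan_pmf(k=2.0, lam=9.0)      # ~9-year characteristic lifespan (assumed)
sales = [100, 120, 150, 180, 210]       # thousand units sold, 2000-2004 (invented)
print(round(waste_generated(sales, pmf, 2008, 2000), 1))
```

Replacing the fixed (k, lam) pair with year-dependent parameters is what the abstract calls a dynamic, adjusted lifespan.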

Keywords: e-waste, non-registered electrical and electronic equipment, TVs, Vietnam

Procedia PDF Downloads 220
211 Impact of Climate Change on Irrigation and Hydropower Potential: A Case of Upper Blue Nile Basin in Western Ethiopia

Authors: Elias Jemal Abdella

Abstract:

The Blue Nile River is an important shared resource of Ethiopia and Sudan and, because it is the major contributor of water to the main Nile River, of Egypt. Despite the potential benefits of regional cooperation and integrated joint basin management, all three countries continue to pursue unilateral development plans. In addition, there is great uncertainty about the likely impacts of climate change on water availability for existing as well as proposed irrigation and hydropower projects in the Blue Nile Basin. The main objective of this study is to quantitatively assess the impact of climate change on the hydrological regime of the upper Blue Nile basin, western Ethiopia. Three models were combined. First, dynamic Coordinated Regional Climate Downscaling Experiment (CORDEX) regional climate models (RCMs) were used to determine climate projections for the Upper Blue Nile basin under the Representative Concentration Pathway (RCP) 4.5 and 8.5 greenhouse gas emissions scenarios for the period 2021-2050. The outputs generated from a multimodel ensemble of four CORDEX RCMs (i.e., rainfall and temperature) were used as input to a Soil and Water Assessment Tool (SWAT) hydrological model, which was set up, calibrated, and validated with observed climate and hydrological data. The outputs from the SWAT model (i.e., projected river flows) were in turn used as input to a Water Evaluation and Planning (WEAP) water resources model, which was used to determine the water resources implications of the changes in climate. The WEAP model was set up to simulate three development scenarios: the Current Development scenario represented the existing water resource development situation; the Medium-term Development scenario included planned water resource developments expected to be commissioned before 2025; and the Long-term Full Development scenario included all planned water resource developments likely to be commissioned before 2050. 
The projected mean annual temperature for the period 2021-2050 in most of the basin is 1 to 1.4 °C warmer than the baseline (1982-2005) average, implying an increase in evapotranspiration losses. Sub-basins already distressed by drought may face even greater challenges in the future. Projected mean annual precipitation varies from sub-basin to sub-basin: in the eastern, north-eastern, and south-western highlands of the basin, mean annual precipitation is likely to increase by up to 7%, whereas in the western lowland part of the basin it is projected to decrease by 3%. The water use simulation indicates that the current irrigation demand in the basin is 1.29 billion m³ per year (Bm³/yr) for 122,765 ha of irrigated area. By 2025, with new schemes being developed, irrigation demand is estimated to increase to 2.5 Bm³/yr for 277,779 ha. By 2050, irrigation demand in the basin is estimated to reach 3.4 Bm³/yr for 372,779 ha. The hydropower generation simulation indicates that 98% of the hydroelectric potential could be produced if all planned dams are constructed.
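
As a consistency check on the demand and area figures above, the implied average irrigation water duty (total demand divided by irrigated area) can be computed directly; the helper name is ours, and only the scenario totals come from the abstract:

```python
def duty_m3_per_ha(demand_bm3_per_yr, area_ha):
    """Average irrigation water duty implied by a scenario's totals,
    in cubic metres per hectare per year."""
    return demand_bm3_per_yr * 1e9 / area_ha

# Scenario totals taken from the abstract:
for name, demand, area in [
    ("current",          1.29, 122_765),
    ("medium-term 2025", 2.5,  277_779),
    ("long-term 2050",   3.4,  372_779),
]:
    print(f"{name}: {duty_m3_per_ha(demand, area):,.0f} m3/ha/yr")
```

The per-hectare duty stays in the same rough band (about 9,000-10,500 m³/ha/yr) across scenarios, which suggests the projected demand growth is driven mainly by area expansion rather than by a change in per-hectare use.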

Keywords: Blue Nile River, climate change, hydropower, SWAT, WEAP

Procedia PDF Downloads 319
210 Accessible Facilities in Home Environment for Elderly Family Members in Sri Lanka

Authors: M. A. N. Rasanjalee Perera

Abstract:

The world is facing several problems due to its increasing elderly population. In Sri Lanka, along with the complexity of modern society and the structural and functional changes of the family, “caring for elders” is emerging as a social problem. This situation may intensify as the country moves towards middle-income status. Seeking higher education and related career opportunities, and urban living in modern housing, are new trends through which several problems are generated. Among the many issues related to elders, a lack of accessible and appropriate facilities in their houses, as well as in public buildings, can be identified as a major problem. This study argues that the welfare facilities provided for elderly people in the country, particularly in the home environment, are not adequate. It is questionable whether modern housing features such as bathrooms, pantries, lobbies, and leisure areas match elders’ physical and mental needs. Consequently, elders face domestic accidents and many other difficulties within their living environments, a fact also borne out by hospital records in the country. Therefore, this study tries to identify how far modern houses are suited to elders’ needs. The study further questions whether aging is given due consideration when people buy, plan, and renovate houses. To obtain primary data, a randomly selected sample of 50 houses was observed and 50 persons were interviewed around the Maharagama urban area in Colombo district, while relevant secondary data and information were used for an in-depth analysis. The study clearly found that none of the houses in the sample considered elders’ needs in planning, renovating, or arranging the home. Instead, most of the families gave priority to a rich and elegant appearance and to the modern facilities of the houses. 
In particular, bathrooms, pantries, large sitting areas, balconies, parking slots for two vehicles, and parapet walls with roller gates were the main concerns. A significant finding is that even though many children of the aged are themselves middle-aged and approaching their older years, they do not plan their future living within a safe and comfortable home, despite hoping to spend the latter part of their lives in their current homes. This highlights that not only other responsible parts of society but also those who are reaching old age ignore the problems of the aged. At the same time, it was found that more than 80% of old parents do not like to stay at their children’s homes, as the living environments in such modern homes are neither familiar nor convenient for them. In this context, the aged in Sri Lanka may be left alone in their own homes owing to the current trend of migrating to urban living in modern houses. At the same time, current urban families who live in modern houses may later have to add accessible facilities to their home environments, as present-day modern housing facilities may not be appropriate for a better life in the latter part of their lives.

Keywords: aging population, elderly care, home environment, housing facilities

Procedia PDF Downloads 104
209 Seafloor and Sea Surface Modelling in the East Coast Region of North America

Authors: Magdalena Idzikowska, Katarzyna Pająk, Kamil Kowalczyk

Abstract:

Seafloor topography is a fundamental issue in geological, geophysical, and oceanographic studies. Single-beam or multibeam sonars attached to the hulls of ships emit a hydroacoustic signal from transducers and reproduce the topography of the seabed. This solution provides relevant accuracy and spatial resolution, and bathymetric data from ship surveys are provided by the National Centers for Environmental Information of the National Oceanic and Atmospheric Administration. Unfortunately, most of the seabed is still unmapped, as there are many gaps to be explored between ship survey tracks; moreover, such measurements are very expensive and time-consuming. One solution is the raster bathymetric models shared by the General Bathymetric Chart of the Oceans (GEBCO). The products offered are compilations of different data sets, raw or processed. Measurements of gravity anomalies also serve as indirect data for the development of bathymetric models: some forms of seafloor relief (e.g., seamounts) increase the force of the Earth's pull, leading to changes in the sea surface. Based on satellite altimetry data, sea surface height and marine gravity anomalies can be estimated, and from the anomalies it is possible to infer the structure of the seabed. The main goal of this work is to create regional bathymetric models and models of the sea surface in the area of the east coast of North America, a region of seamounts and undulating seafloor. The research includes an analysis of the methods and techniques used, an evaluation of the interpolation algorithms employed, model densification, and the creation of grid models. The data used are raster bathymetric models in NetCDF format, survey data from multibeam soundings in MB-System format, and satellite altimetry data from the Copernicus Marine Environment Monitoring Service. The methodology includes data extraction, processing, mapping, and spatial analysis. 
Visualization of the results was carried out with Geographic Information System tools. The result is an extension of the state of knowledge of the quality and usefulness of the data used for seabed and sea surface modeling, and of the accuracy of the generated models. Sea level is averaged over time and space (excluding waves, tides, etc.); its changes, along with knowledge of the topography of the ocean floor, inform us indirectly about the volume of the entire ocean. The true shape of the ocean surface is further varied by phenomena such as tides, differences in atmospheric pressure, wind systems, thermal expansion of water, and phases of ocean circulation. Depending on the location of a point, the greater the depth, the smaller the trend of sea level change. The studies show that combining data sets from different sources, with different accuracies, can affect the quality of sea surface and seafloor topography models.
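
As one example of the kind of interpolation algorithm evaluated when gridding sparse soundings, a minimal inverse-distance-weighting sketch is shown below; the coordinates and depths are invented for illustration, and the study does not state which interpolator it ultimately adopts:

```python
def idw(points, query, power=2.0):
    """Inverse-distance-weighted depth at a query location, a common
    baseline when gridding sparse soundings (illustrative only)."""
    num = den = 0.0
    for (x, y, depth) in points:
        d2 = (x - query[0]) ** 2 + (y - query[1]) ** 2
        if d2 == 0.0:
            return depth                  # query coincides with a sounding
        w = d2 ** (-power / 2.0)          # weight ~ 1 / distance^power
        num += w * depth
        den += w
    return num / den

soundings = [(0, 0, -4200.0), (1, 0, -4350.0), (0, 1, -4100.0)]
print(round(idw(soundings, (0.5, 0.5)), 1))
```

Comparing such simple interpolators against the kriging- or spline-based grids in products like GEBCO is one way to evaluate the accuracy of the generated models.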

Keywords: seafloor, sea surface height, bathymetry, satellite altimetry

Procedia PDF Downloads 56
208 Surface Roughness in the Incremental Forming of Drawing Quality Cold Rolled CR2 Steel Sheet

Authors: Zeradam Yeshiwas, A. Krishnaia

Abstract:

The aim of this study is to verify the resulting surface roughness of parts formed by the Single-Point Incremental Forming (SPIF) process for an ISO 3574 Drawing Quality Cold Rolled CR2 steel. The chemical composition of drawing quality Cold Rolled CR2 steel comprises 0.12 percent carbon, 0.5 percent manganese, 0.035 percent sulfur, and 0.04 percent phosphorus, with the remainder iron and negligible impurities. The experiments were performed on a 3-axis vertical CNC milling machining center equipped with a tool setup comprising a fixture and forming tools specifically designed and fabricated for the process. The CNC milling machine transferred the tool path code generated in the Mastercam 2017 environment into three-dimensional motions through the linear incremental progress of the spindle. Blanks of Drawing Quality Cold Rolled CR2 steel sheet, 1 mm thick, were fixed along their periphery by a fixture, and hardened high-speed steel (HSS) tools with hemispherical tips of 8, 10, and 12 mm diameter were employed to fabricate the sample parts. To investigate the surface roughness, hyperbolic-cone specimens were fabricated based on the chosen experimental design. The effect of process parameters on the surface roughness was studied using three important process parameters, i.e., tool diameter, feed rate, and step depth. The Taylor-Hobson Surtronic 3+ surface roughness tester (profilometer) was used to determine the surface roughness of the fabricated parts in terms of the arithmetic mean deviation (Rₐ); in this instrument, a small stylus tip is dragged across a surface while its deflection is recorded. Finally, the optimum process parameters and the main factor affecting surface roughness were found using the Taguchi design of experiments and ANOVA. 
A Taguchi experimental design with three factors and three levels for each factor, the standard orthogonal array L9 (3³), was selected for the study using the array selection table. The finishing roughness parameter Rₐ was measured for the different process combinations: for each combination of the control factors, four roughness measurements were taken on a single component, and the average was used to optimize the surface roughness. Because the lowest value of Rₐ is desired for surface roughness improvement, the ‘‘smaller-the-better’’ equation was used for the calculation of the S/N ratio. The analysis of the effect of each control factor on the surface roughness was performed with an ‘‘S/N response table’’. Optimum surface roughness was obtained at a feed rate of 1500 mm/min, with a tool diameter of 12 mm and a step depth of 0.5 mm. The ANOVA result shows that step depth is the dominant factor affecting surface roughness (91.1%).
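
The smaller-the-better S/N ratio referred to above is conventionally S/N = -10·log₁₀((1/n)·Σyᵢ²), so that lower roughness yields a higher S/N value. A minimal sketch, with illustrative Rₐ readings rather than the study's measured values:

```python
import math

def sn_smaller_is_better(values):
    """Taguchi smaller-the-better signal-to-noise ratio:
    S/N = -10 * log10(mean of squared responses). Higher is better."""
    return -10.0 * math.log10(sum(v * v for v in values) / len(values))

# Four Ra readings (um) for one trial, mirroring the four measurements
# taken per component in the study (values are illustrative):
ra = [1.2, 1.3, 1.1, 1.25]
print(round(sn_smaller_is_better(ra), 2))
```

In the S/N response table, each factor level's mean S/N is compared, and the level with the highest mean S/N is selected as optimal.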

Keywords: incremental forming, SPIF, drawing quality steel, surface roughness, roughness behavior

Procedia PDF Downloads 40
207 Safety Considerations of Furanics for Sustainable Applications in Advanced Biorefineries

Authors: Anitha Muralidhara, Victor Engelen, Christophe Len, Pascal Pandard, Guy Marlair

Abstract:

The production of bio-based chemicals and materials from lignocellulosic biomass is gaining tremendous importance in advanced biorefineries, which aim at the progressive replacement of petroleum-based chemicals in transportation fuels and commodity polymers. One such effort has resulted in the production of key furan derivatives (FD) such as furfural, HMF, and MMF via the acid-catalyzed dehydration (ACD) of C6 and C5 sugars; these are further converted into key chemicals or intermediates (such as furandicarboxylic acid and furfuryl alcohol). In subsequent processes, many high-potential FD are produced that can be converted into high-added-value polymers or high-energy-density biofuels. During ACD, an unavoidable polyfuranic byproduct called humins is generated. The family of FD is very large, with varying chemical structures and diverse physicochemical properties; accordingly, the associated risk profiles may vary widely. Hazardous material (haz-mat) classification systems such as the GHS (CLP in the EU) and the UN TDG Model Regulations for the transport of dangerous goods are a preliminary requirement for all chemicals for their appropriate classification, labelling, packaging, safe storage, and transportation. Considering the growing application routes of FD, the limited availability of safety-related information in these internationally recognized haz-mat classification systems is notable: safety data sheets are available only for well-known compounds such as HMF and furfural. Moreover, these classifications do not necessarily provide information about the extent of risk involved when a chemical is used in any specific application. Factors such as thermal stability, speed of combustion, and chemical incompatibilities can equally influence the safety profile of a compound, yet they are clearly out of the scope of any haz-mat classification system. 
Irrespective of their bio-based origin, FD have so far received inconsistent remarks concerning their toxicity profiles. With such inconsistencies, there is a fear that the large family of FD may follow extreme judgmental scenarios like ionic liquids, with some compounds ranked as extremely thermally stable, non-flammable, and so on. Unless clarified, such messages could lead to misleading judgements when ranking a chemical based on its hazard rating. Safety is a key aspect of any sustainable biorefinery operation or facility, yet it is often understated or neglected. To fill these existing data gaps and to address ambiguities and discrepancies, the current study gives preliminary insights into the safety assessment of FD and their potential targeted by-products. Drawing on the available literature and the experimental results obtained, the physicochemical, environmental, and (scenario-based) fire safety profiles of key FD, as well as of side streams such as humins and levulinic acid, will be considered. The study thereby aims to define patterns and trends that give coherent safety-related information for existing and newly synthesized FD on the market, supporting better functionality and sustainable applications.

Keywords: furanics, humins, safety, thermal and fire hazard, toxicity

Procedia PDF Downloads 148
206 Caged Compounds as Light-Dependent Initiators for Enzyme Catalysis Reactions

Authors: Emma Castiglioni, Nigel Scrutton, Derren Heyes, Alistair Fielding

Abstract:

By using light as a trigger, it is possible to study many biological processes, such as the activity of genes, proteins, and other molecules, with precise spatiotemporal control. Caged compounds, where biologically active molecules are generated from an inert precursor upon laser photolysis, offer the potential to initiate such biological reactions with high temporal resolution. As light acts as the trigger for cleaving the protecting group, the ‘caging’ technique provides a number of advantages: it can be intracellular, rapid, and controlled in a quantitative manner. We are developing caging strategies to study the catalytic cycle of a number of enzyme systems, such as nitric oxide synthase and ethanolamine ammonia lyase. These include the use of caged substrates, caged electrons, and the possibility of caging the enzyme itself. In addition, we are developing a novel freeze-quench instrument to study these reactions, which combines rapid mixing and flashing capabilities. Reaction intermediates will be trapped at low temperatures and analysed using electron paramagnetic resonance (EPR) spectroscopy to identify the involvement of any radical species during catalysis. EPR techniques typically require relatively long measurement times and, very often, low temperatures to fully characterise these short-lived species; common rapid mixing techniques, such as stopped-flow or quench-flow, are therefore not directly suitable. However, rapid freeze-quench (RFQ) followed by EPR analysis provides an ideal approach to kinetically trap and spectroscopically characterise these transient radical species. In a typical RFQ experiment, two reagent solutions are delivered to the mixer via two syringes driven by a pneumatic actuator or stepper motor. The newly mixed solution is then sprayed into a cryogenic liquid or onto a cold surface, and the frozen sample is collected and packed into an EPR tube for analysis. 
The earliest RFQ instruments consisted of a hydraulic ram as the drive unit, with the sample sprayed directly into a cryogenic liquid (nitrogen, isopentane, or petroleum). Improvements to the RFQ technique have come from new mixer designs that reduce both the volume and the mixing time. In addition, the cryogenic isopentane bath has been coupled to a filtering system or replaced by spraying the solution onto a surface frozen via thermal conduction with a cryogenic liquid. In our work, we are developing a novel RFQ instrument that combines freeze-quench technology with flashing capabilities to enable studies of both thermally activated and light-activated biological reactions. This instrument also uses a new rotating-plate design based on magnetic couplings, removing the need for mechanical motorised rotation, which can otherwise be problematic at cryogenic temperatures.

Keywords: caged compounds, freeze-quench apparatus, photolysis, radicals

Procedia PDF Downloads 189
205 Purpose-Driven Collaborative Strategic Learning

Authors: Mingyan Hong, Shuozhao Hou

Abstract:

Collaborative Strategic Learning (CSL) teaches students to use learning strategies while working cooperatively. Student strategies include the following steps: defining the learning task and purpose; conducting an ongoing negotiation of the learning materials by deciding "click" (I get it and I can teach it – green card; I get it – yellow card) or "clunk" (I don't get it – red card) at the end of each learning unit; "getting the gist" of the most important parts of the learning materials; and "wrapping up" key ideas. This approach helps students of mixed achievement levels apply learning strategies while learning content-area material in small groups. The design of CSL is based on social constructivism and Vygotsky’s best-known concept, the Zone of Proximal Development (ZPD): the distance between the actual developmental level, as determined by independent problem solving, and the level of potential development, as determined through problem solving under a facilitator’s guidance or in group work with more capable members (Vygotsky, 1978); it is similar to Krashen’s (1980) i+1. Vygotsky claimed that the learner's ideal learning environment is in the ZPD. An ideal teacher or more knowledgeable other (MKO) should be able to recognize a learner’s ZPD and facilitate development beyond it; the MKO can then withdraw support step by step until the learner can perform the task without aid. Stephen Krashen (1980) proposed the input hypothesis, including the i+1 concept; input hypothesis models are the application of the ZPD to second language acquisition and have been widely recognized to this day. Krashen’s (2019) optimal language learning environment further developed the application of the ZPD and added the component of strategic group learning. 
Strategic group learning is composed of desirable learning materials that learners are motivated to learn and desirable group members who are more capable and therefore able to offer meaningful input to the learners. The Purpose-Driven Collaborative Strategic Learning Model is a strategic integration of the ZPD, the i+1 hypothesis model, and the Optimal Language Learning Environment Model. It is purpose-driven to ensure that group members are motivated. It is collaborative so as to create an optimal learning environment in which meaningful input is generated from meaningful conversation. It is strategic because facilitators in the model strategically assign each member a meaningful and collaborative role (e.g., team leader, technician, problem solver, appraiser), offer a group-learning instrument so that the learning process is structured, and integrate group learning with team building to ensure the holistic development of each participant. Using data collected from first- and second-year college students’ English courses, this presentation will demonstrate how the purpose-driven collaborative strategic learning model is implemented in the second/foreign language classroom, drawing on qualitative data from questionnaires and interviews. In particular, it will show how second/foreign language learners grow from functioning with the aid of a facilitator or more capable peer to performing without aid. The implication of this research is that the purpose-driven collaborative strategic learning model can be used not only in language learning but in any subject area.

Keywords: collaborative, strategic, optimal input, second language acquisition

Procedia PDF Downloads 105
204 Augmented Reality Enhanced Order Picking: The Potential for Gamification

Authors: Stavros T. Ponis, George D. Plakas-Koumadorakis, Sotiris P. Gayialis

Abstract:

Augmented Reality (AR) can be defined as a technology which uses computer-generated display, sound, text, and effects to enhance the user's real-world experience by overlaying virtual objects onto the real world. In doing so, AR can provide a vast array of work support tools, which can significantly increase employee productivity, enhance existing job training programs by making them more realistic, and in some cases introduce completely new forms of work and task execution. One of the most promising industrial applications of AR, as the literature shows, is the use of head-worn, monocular or binocular displays (HWDs) to support logistics and production operations, such as order picking, part assembly, and maintenance. This paper presents the initial results of an ongoing research project on the introduction of a dedicated AR-HWD solution to the picking process of a Distribution Center (DC) in Greece operated by a large Telecommunication Service Provider (TSP). In that context, the proposed research aims to determine whether gamification elements should be integrated into the functional requirements of the AR solution, such as awarding points for reaching objectives and creating leaderboards and awards (e.g., badges) for general achievements. Up to now, the impact of gamification on logistics operations has remained ambiguous, since the gamification literature mostly focuses on non-industrial organizational contexts, such as education and customer- or citizen-facing applications in tourism and health. By contrast, the gamification efforts described in this study focus on one of the most labor-intensive and workflow-dependent logistics processes, i.e., Customer Order Picking (COP). Although introducing AR to COP undoubtedly creates significant opportunities for workload reduction and increased process performance, the added value of gamification is far from certain. 
This paper aims to provide insights on the suitability and usefulness of AR-enhanced gamification in the hard and very demanding environment of a logistics center. In doing so, it will utilize a review of the current state of the art regarding gamification of production and logistics processes, coupled with the results of questionnaire-guided interviews with industry experts, i.e. logisticians, warehouse workers (pickers) and AR software developers. The findings of the proposed research aim to contribute towards a better understanding of AR-enhanced gamification, the organizational change it entails and the consequences it potentially has for all implicated entities in the often highly standardized and structured work required in the logistics setting. The interpretation of these findings will support the decision of logisticians regarding the introduction of gamification in their logistics processes by providing them useful insights and guidelines originating from a real-life case study of a large DC serving more than 300 retail outlets in Greece.

Keywords: augmented reality, technology acceptance, warehouse management, vision picking, new forms of work, gamification

Procedia PDF Downloads 120
203 Metal-Semiconductor Transition in Ultra-Thin Titanium Oxynitride Films Deposited by ALD

Authors: Farzan Gity, Lida Ansari, Ian M. Povey, Roger E. Nagle, James C. Greer

Abstract:

Titanium nitride (TiN) films have been widely used in a variety of fields due to their unique electrical, chemical, physical and mechanical properties, including low electrical resistivity, chemical stability, and high thermal conductivity. In microelectronic devices, thin continuous TiN films are commonly used as diffusion barriers and metal gate material. However, as the film thickness decreases below a few nanometers, the electrical properties of the film change considerably. In this study, the physical and electrical characteristics of 1.5nm to 22nm thin films deposited by Plasma-Enhanced Atomic Layer Deposition (PE-ALD) using Tetrakis(dimethylamino)titanium(IV) (TDMAT) chemistry and Ar/N2 plasma on 80nm SiO2, capped in-situ by 2nm Al2O3, are investigated. The ALD technique allows uniformly thick films at the monolayer level in a highly controlled manner. The chemistry incorporates a low level of oxygen into the TiN films, forming titanium oxynitride (TiON). The thickness of the films is characterized by Transmission Electron Microscopy (TEM), which confirms the uniformity of the films. The surface morphology of the films is investigated by Atomic Force Microscopy (AFM), indicating sub-nanometer surface roughness. Hall measurements are performed to determine parameters such as carrier mobility, type and concentration, as well as resistivity. The >5nm-thick films exhibit metallic behavior; however, we have observed that thin-film resistivity is modulated significantly by film thickness, such that the room-temperature sheet resistance increases by more than five orders of magnitude between the 5nm and 1.5nm films. 
Scattering effects at interfaces and grain boundaries could play a role in the thickness-dependent resistivity, in addition to the quantum confinement effect that can occur in ultra-thin films: based on our measurements, the carrier concentration decreases from 1.5×10²² cm⁻³ to 5.5×10¹⁷ cm⁻³, while the mobility increases from < 0.1 cm²/V·s to ~4 cm²/V·s for the 5nm and 1.5nm films, respectively. Also, measurements at different temperatures indicate that the resistivity is relatively constant for the 5nm film, while for the 1.5nm film a reduction of more than 2 orders of magnitude has been observed over the range of 220K to 400K. The activation energies of the 2.5nm and 1.5nm films are 30meV and 125meV, respectively, indicating that the ultra-thin TiON films exhibit semiconducting behaviour; we attribute this effect to a metal-semiconductor transition. By the same token, the contact is no longer ohmic for the thinnest film (i.e., the 1.5nm-thick film); hence, a modified lift-off process was developed to selectively deposit thicker films, allowing us to perform electrical measurements with low contact resistance on the raised contact regions. Our atomic-scale simulations, based on molecular dynamics-generated amorphous TiON structures with low oxygen content, confirm our experimental observations, indicating highly n-type thin films.
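As an illustration of how such activation energies can be extracted from temperature-dependent resistivity, the sketch below fits the Arrhenius relation ρ ∝ exp(Eₐ/kT) on a ln(ρ) versus 1/T plot. The data are synthetic, generated from the 125 meV value reported above for the 1.5nm film; the fitting routine is an illustrative assumption, not the authors' analysis code.

```python
import math

K_B_EV = 8.617e-5  # Boltzmann constant in eV/K

def activation_energy_ev(temps_k, resistivities):
    """Least-squares slope of ln(rho) versus 1/T; for thermally
    activated conduction, rho ~ rho0 * exp(Ea / (k_B * T)),
    so the slope equals Ea / k_B."""
    xs = [1.0 / t for t in temps_k]
    ys = [math.log(r) for r in resistivities]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope * K_B_EV

# Synthetic resistivity data generated with Ea = 0.125 eV, the value
# reported above for the 1.5nm film; real inputs would be measured.
temps = [220.0 + 20.0 * i for i in range(10)]  # 220-400 K
rho = [1e-2 * math.exp(0.125 / (K_B_EV * t)) for t in temps]
print(round(activation_energy_ev(temps, rho), 3))  # 0.125
```

On measured data the fit would of course recover Eₐ only approximately, and a regime change (metallic vs. activated) would show as curvature in the Arrhenius plot.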

Keywords: activation energy, ALD, metal-semiconductor transition, resistivity, titanium oxynitride, ultra-thin film

Procedia PDF Downloads 266
202 Fuels and Platform Chemicals Production from Lignocellulosic Biomass: Current Status and Future Prospects

Authors: Chandan Kundu, Sankar Bhattacharya

Abstract:

A significant disadvantage of fossil fuel energy production is the considerable amount of carbon dioxide (CO₂) released, which is one of the contributors to climate change. Apart from environmental concerns, changing fossil fuel prices have pushed society gradually towards renewable energy sources in recent years. Biomass is a plentiful and renewable resource and a source of carbon. Recent years have seen increased research interest in generating fuels and chemicals from biomass. Unlike fossil-based resources, biomass is composed of lignocellulosic material, which does not contribute to the increase in atmospheric CO₂ over the longer term. These considerations contribute to the current move of the chemical industry from non-renewable feedstock to renewable biomass. This presentation focuses on generating bio-oil and two major platform chemicals with potential environmental benefits. Thermochemical processes such as pyrolysis are considered viable methods for producing bio-oil and biomass-based platform chemicals. Fluidized bed reactors in particular are known to boost bio-oil yields during pyrolysis due to their superior mixing and heat transfer features, as well as their scalability. This review and the associated experimental work are focused on the thermochemical conversion of biomass to bio-oil and two high-value platform chemicals, levoglucosenone (LGO) and 5-chloromethylfurfural (5-CMF), in a fluidized bed reactor. These two active molecules with distinct features can potentially be useful monomers in the chemical and pharmaceutical industries, since they are well adapted to the manufacture of biologically active products. The process involved several meticulous steps. First, the biomass was delignified using a peracetic acid pretreatment. Because of its complicated structure, biomass must be pretreated to remove the lignin, which increases access to the carbohydrate components so that they can be converted to platform chemicals. 
The biomass was then characterized in the laboratory by thermogravimetric analysis, synchrotron-based THz spectroscopy, and in-situ DRIFTS. Based on the results, a continuous-feed fluidized bed reactor system was constructed to generate platform chemicals from pretreated biomass using hydrogen chloride acid gas as a catalyst. The procedure also yields biochar, which has a number of potential applications, including soil remediation, wastewater treatment, electrode production, and energy resource utilization. Consequently, this research also includes a preliminary experimental evaluation of the biochar's prospective applications; the biochar obtained was evaluated for its CO₂ and steam reactivity. The outline of the presentation will comprise the following: (1) biomass pretreatment for effective delignification; (2) a mechanistic study of the thermal and thermochemical conversion of biomass; (3) thermochemical conversion of untreated and pretreated biomass in the presence of an acid catalyst to produce LGO and 5-CMF; (4) a thermo-catalytic process for the production of LGO and 5-CMF in a continuously fed fluidized bed reactor, with efficient separation of the chemicals; and (5) use of the biochar generated from platform chemicals production through gasification.

Keywords: biomass, pretreatment, pyrolysis, levoglucosenone

Procedia PDF Downloads 99
201 Resolving a Piping Vibration Problem by Installing Viscous Damper Supports

Authors: Carlos Herrera Sierralta, Husain M. Muslim, Meshal T. Alsaiari, Daniel Fischer

Abstract:

Preventing fatigue failures caused by flow-induced vibration of piping in the Oil & Gas sector demands not only the constant development of engineering design methodologies based on available software packages, but also special piping support technologies for designing safe and reliable piping systems. The vast majority of piping vibration problems in the Oil & Gas industry are provoked by the process flow characteristics, which are intrinsically related to the fluid properties, the type of service and its different operational scenarios. In general, the corrective actions recommended for flow-induced vibration in piping systems can be grouped into two major areas: those which affect the excitation mechanisms, typically associated with process variables, and those which affect the response mechanism of the pipework per se and its associated steel support structure. Where possible, the first option is to solve the flow-induced problem from the excitation mechanism perspective. However, in producing facilities, the approach of changing process parameters might not always be convenient, as it could lead to reduced production rates or require a shutdown of the system in order to perform the required piping modification. That impediment might lead to the second option, which is to modify the response of the piping system to the excitation generated by the process flow. In principle, shifting the natural frequency of the system well above the frequency inherent to the process eliminates, or considerably reduces, the level of vibration experienced by the piping system. Tightening up the clearances at the supports (ideally to zero gap) and adding new static supports to the system are typical ways of increasing the natural frequency of the piping system. 
However, stiffening the piping system alone may not be sufficient to resolve the vibration problem, and in some cases it might not be feasible at all, as the available piping layout could limit the addition of supports due to thermal expansion/contraction requirements. In these cases, viscous damper supports can be recommended, as these devices allow relatively large quasi-static movement of the piping while providing sufficient capability to dissipate the vibration. Therefore, when correctly selected and installed, viscous damper supports can significantly affect the response of the piping system over a wide range of frequencies. Viscous dampers cannot, however, be used to support sustained static loads. Through a real case example, this paper presents a methodology for selecting viscous damper supports via a dynamic analysis model. By implementing this methodology, it was possible to resolve the piping vibration problem by adequately redesigning the existing static piping supports and adding new viscous damper supports. This was conducted on-stream on the crude oil pipeline in question without the need to reduce the production of the plant. The methodology of this paper can therefore be applied to solve similar cases in a straightforward manner.
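The effect of stiffening on the natural frequency described above can be illustrated with the textbook single-degree-of-freedom relation f = √(k/m)/2π. All numbers below are hypothetical, chosen only to show the direction of the effect; they are not taken from the case study.

```python
import math

def natural_frequency_hz(stiffness_n_per_m, mass_kg):
    """Undamped SDOF natural frequency: f = sqrt(k/m) / (2*pi)."""
    return math.sqrt(stiffness_n_per_m / mass_kg) / (2.0 * math.pi)

# Hypothetical values, for illustration only (not from the case study):
# a pipe span idealized as a lumped mass on its supports, excited by
# slug flow at roughly 8 Hz.
mass = 120.0          # kg, lumped pipe + fluid mass
k_original = 2.0e5    # N/m, original support stiffness
k_stiffened = 1.2e6   # N/m, after tightening gaps / adding supports
print(round(natural_frequency_hz(k_original, mass), 2))   # 6.5, near excitation
print(round(natural_frequency_hz(k_stiffened, mass), 2))  # 15.92, well above
```

Because frequency scales with the square root of stiffness, a sixfold stiffness increase only roughly two-and-a-half-folds the natural frequency, which is one reason stiffening alone can be insufficient.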

Keywords: dynamic analysis, flow induced vibration, piping supports, turbulent flow, slug flow, viscous damper

Procedia PDF Downloads 99
200 Design and Integration of an Energy Harvesting Vibration Absorber for Rotating System

Authors: F. Infante, W. Kaal, S. Perfetto, S. Herold

Abstract:

In the last decade, the demand for wireless sensors and low-power electric devices for condition monitoring of mechanical structures has increased strongly. Networks of wireless sensors can potentially be applied in a huge variety of applications. Due to the reduction of both size and power consumption of the electric components and the increasing complexity of mechanical systems, interest in creating dense sensor-node networks has grown considerably. Nevertheless, with the development of large sensor networks with numerous nodes, the critical problem of powering them is drawing more and more attention. Batteries are not a viable option, considering their lifetime, size and the effort involved in replacing them. Among possible durable power sources usable in mechanical components, vibration represents a suitable source for the amount of power required to feed a wireless sensor network. For this purpose, energy harvesting from structural vibrations has received much attention in the past few years. Suitable vibrations can be found in numerous mechanical environments, including moving automotive structures and household appliances, but also civil engineering structures like buildings and bridges. Similarly, the dynamic vibration absorber (DVA) is one of the most widely used devices to mitigate unwanted vibration of structures. This device transfers the primary structural vibration to an auxiliary system; thus, the related energy is effectively localized in the secondary, less sensitive structure. The additional benefit of harvesting part of this energy can then be obtained by implementing dedicated components. This paper describes the design process of an energy harvesting tuned vibration absorber (EHTVA) for rotating systems using piezoelectric elements. The energy of the vibration is converted into electricity rather than dissipated. 
The proposed device is designed to mitigate torsional vibrations, as with a conventional rotational TVA, while harvesting energy as a power source for immediate use or storage. The resulting rotational multi-degree-of-freedom (MDOF) system is first reduced to an equivalent single-degree-of-freedom (SDOF) system. Den Hartog's theory is used to evaluate the optimal mechanical parameters of the initial DVA for the defined SDOF system. The performance of the TVA is assessed operationally, and the vibration reduction at the original resonance frequency is measured. The design is then modified for the integration of active piezoelectric patches without detuning the TVA. In order to estimate the real power generated, a complex storage circuit is implemented. A DC-DC step-down converter is connected to the device through a rectifier to return a fixed output voltage. Using a large capacitor, the energy stored is measured at different frequencies. Finally, the electromechanical prototype is tested and validated, achieving the reduction and harvesting functions simultaneously.
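For reference, Den Hartog's classical tuning rules mentioned above reduce to two closed-form expressions in the absorber-to-primary mass ratio μ. The sketch below evaluates them for a hypothetical μ = 0.05; this value is an assumption for illustration, not a parameter of the actual device.

```python
import math

def den_hartog_optimum(mass_ratio):
    """Den Hartog's classical tuning for a DVA on an undamped primary
    structure: returns the optimal absorber/primary frequency ratio
    f_opt = 1/(1+mu) and the optimal absorber damping ratio
    zeta_opt = sqrt(3*mu / (8*(1+mu)^3))."""
    mu = mass_ratio
    f_opt = 1.0 / (1.0 + mu)
    zeta_opt = math.sqrt(3.0 * mu / (8.0 * (1.0 + mu) ** 3))
    return f_opt, zeta_opt

# Hypothetical absorber-to-primary (modal) mass ratio of 5%.
f_opt, zeta_opt = den_hartog_optimum(0.05)
print(round(f_opt, 4), round(zeta_opt, 4))  # 0.9524 0.1273
```

In the EHTVA context, the electromechanical coupling of the piezoelectric patches adds effective damping, which is one reason the mechanical design must be verified for detuning after the patches are integrated.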

Keywords: energy harvesting, piezoelectricity, torsional vibration, vibration absorber

Procedia PDF Downloads 118
199 Navigating Complex Communication Dynamics in Qualitative Research

Authors: Kimberly M. Cacciato, Steven J. Singer, Allison R. Shapiro, Julianna F. Kamenakis

Abstract:

This study examines the dynamics of communication among researchers and participants who have various levels of hearing, use multiple languages, have various disabilities, and who come from different social strata. This qualitative methodological study focuses on the strategies employed in an ethnographic research study examining the communication choices of six sets of parents who have Deaf-Disabled children. The participating families varied in their communication strategies and preferences, including the use of American Sign Language (ASL), visual-gestural communication, multiple spoken languages, and pidgin forms of each of these. The research team consisted of two undergraduate students proficient in ASL and a Deaf principal investigator (PI) who uses ASL and speech as his main modes of communication. A third Hard-of-Hearing undergraduate student fluent in ASL served as an objective facilitator of the data analysis. The team created reflexive journals by audio recording, free writing, and responding to team-generated prompts. They discussed interactions between the members of the research team, their evolving relationships, and various social and linguistic power differentials. The researchers reflected on communication during data collection, their experiences with one another, and their experiences with the participating families. Reflexive journals totaled over 150 pages. The outside research assistant reviewed the journals and developed follow-up open-ended questions and probes to further enrich the data. The PI and outside research assistant used NVivo qualitative research software to conduct open inductive coding of the data. They chunked the data individually into broad categories through multiple readings and recognized recurring concepts. They compared their categories, discussed them, and decided which they would develop. The researchers continued to read, reduce, and define the categories until they were able to develop themes from the data. 
The research team found that the various communication backgrounds and skills present greatly influenced the dynamics between the members of the research team and with the participants of the study. Specifically, the following themes emerged: (1) students as communication facilitators and interpreters as barriers to natural interaction, (2) varied language use simultaneously complicated and enriched data collection, and (3) ASL proficiency and professional position resulted in a social hierarchy among researchers and participants. In the discussion, the researchers reflected on their backgrounds, on the internal biases they brought to analyzing the data, and on how social norms and expectations shaped their perceptions when writing their journals. Through this study, the research team found that communication and language skills require significant consideration when working with multiple and complex communication modes. The researchers had to continually assess and adjust their data collection methods to meet the communication needs of the team members and participants. In doing so, the researchers aimed to create an accessible research setting that yielded rich data but learned that this often required compromises from one or more of the research constituents.

Keywords: American Sign Language, complex communication, deaf-disabled, methodology

Procedia PDF Downloads 91
198 Energy Efficiency of Secondary Refrigeration with Phase Change Materials and Impact on Greenhouse Gases Emissions

Authors: Michel Pons, Anthony Delahaye, Laurence Fournaison

Abstract:

Secondary refrigeration consists of splitting large direct-cooling units into volume-limited primary cooling units complemented by secondary loops for transporting and distributing cold. Such a design reduces refrigerant leaks, which are a source of greenhouse gases emitted into the atmosphere. However, inserting the secondary circuit between the primary unit and the users' heat exchangers (UHX) increases the energy consumption of the whole process, which induces an indirect emission of greenhouse gases. It is thus important to check whether that efficiency loss is sufficiently limited for the change to be globally beneficial to the environment. Among the candidate secondary fluids, phase change slurries offer several advantages: they transport latent heat, they stabilize the heat exchange temperature, and the former evaporators can still be used as UHX. The temperature level can also be adapted to the desired cooling application. Herein, the slurry {ice in mono-propylene-glycol solution} (melting temperature Tₘ of 6°C) is considered for food preservation, and the slurry {mixed hydrate of CO₂ + tetra-n-butyl-phosphonium-bromide in aqueous solution of this salt + CO₂} (melting temperature Tₘ of 13°C) is considered for air conditioning. For the sake of thermodynamic consistency, the analysis encompasses the whole process, primary cooling unit plus secondary slurry loop, and the various properties of the slurries, including their non-Newtonian viscosity. The design of the whole process is optimized according to the properties of the chosen slurry and under explicit constraints. As a first constraint, all the units must deliver the same cooling power to the user. The other constraints concern the heat exchange areas, which are prescribed, and the flow conditions, which must prevent deposition of the solid particles transported in the slurry and their agglomeration. Minimization of the total energy consumption leads to the optimal design. 
In addition, the results are analyzed in terms of exergy losses, which allows highlighting the couplings between the primary unit and the secondary loop. One important difference between the ice slurry and the mixed-hydrate slurry is the presence of gaseous carbon dioxide in the latter case. When the mixed-hydrate crystals melt in the UHX, CO₂ vapor is generated at a rate that depends on the phase change kinetics; the flow in the UHX and its heat and mass transfer properties are thereby significantly modified. This effect has never been investigated before. Lastly, inserting the secondary loop between the primary unit and the users increases the temperature difference between the refrigerated space and the evaporator. This results in a loss of global energy efficiency, and therefore in increased energy consumption. The analysis shows that this loss of efficiency is not critical in the first case (Tₘ = 6°C), while the second case leads to more ambiguous results, partially because of the higher melting temperature. The consequences in terms of greenhouse gas emissions are also analyzed.
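The efficiency penalty of lowering the evaporator temperature can be illustrated with the ideal (Carnot) cooling COP. The temperatures below are hypothetical, chosen only to show the direction and rough magnitude of the effect; they are not values from the paper.

```python
def carnot_cop(t_evap_k, t_cond_k):
    """Ideal (Carnot) cooling COP: T_evap / (T_cond - T_evap)."""
    return t_evap_k / (t_cond_k - t_evap_k)

# Hypothetical temperatures, for illustration only: condensation at
# 35 C (308.15 K); a direct unit evaporating at 2 C versus an
# evaporator pulled 5 K lower to generate the secondary ice slurry.
t_cond = 308.15
cop_direct = carnot_cop(275.15, t_cond)
cop_secondary = carnot_cop(270.15, t_cond)
print(round(cop_direct, 2), round(cop_secondary, 2))         # 8.34 7.11
print(round(100.0 * (1.0 - cop_secondary / cop_direct), 1))  # 14.7 (% loss)
```

Real cycles fall well short of the Carnot bound, but the trend is the same: every extra kelvin of temperature difference inserted by the secondary loop raises the indirect emissions that the leak reduction must outweigh.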

Keywords: exergy, hydrates, optimization, phase change material, thermodynamics

Procedia PDF Downloads 109
197 Effect of Noise at Different Frequencies on Heart Rate Variability - Experimental Study Protocol

Authors: A. Bortkiewcz, A. Dudarewicz, P. Małecki, M. Kłaczyński, T. Wszołek, Małgorzata Pawlaczyk-Łuszczyńska

Abstract:

Low-frequency noise (LFN) has been recognized as a special environmental pollutant. It is usually considered a broadband noise dominated by low frequencies from 10 Hz to 250 Hz. A growing body of data shows that LFN differs in nature from other environmental noises that occur at comparable levels but are not dominated by low-frequency components. The primary and most frequent adverse effect of LFN exposure is annoyance. Moreover, some recent investigations showed that LFN at relatively low A-weighted sound pressure levels (40−45 dB) occurring in office-like areas could adversely affect mental performance, especially in highly sensitive subjects. It is well documented that high-frequency noise disturbs various types of human functions; however, there is very little data on the impact of LFN on well-being and health, including the cardiovascular system. Heart rate variability (HRV) is a sensitive marker of autonomic regulation of the circulatory system. Walker and co-workers found that LFN has a significantly more negative impact on cardiovascular response than exposure to high-frequency noise and that changes in HRV parameters resulting from LFN exposure tend to persist over time. The negative reactions of the cardiovascular system in response to LFN generated by wind turbines (20-200 Hz) were confirmed by Chiu. The scientific aim of the study is to assess the relationship between the spectral-temporal characteristics of LFN and the activity of the autonomic nervous system, considering the subjective assessment of annoyance, sensitivity to this type of noise, and cognitive and general health status. The study will be conducted on 20 male students in a special, acoustically prepared, constantly supervised room. 
Each person will be tested four times (four sessions): under non-exposure (sham) conditions and under exposure to wind turbine noise recorded at a distance of 250 meters from the turbine, filtered to different frequency ranges: the acoustic band (20 Hz-20 kHz), the infrasound band (5-20 Hz), and the acoustic and infrasound bands combined. The order of the sessions will be randomized. Each session will last 1 h. There will be a 2-3 day break between sessions to exclude the possibility of an earlier session influencing the results of the next one. Before the first exposure, a questionnaire will be administered on noise sensitivity, general health status (using the GHQ questionnaire), hearing status and sociodemographic data. Before each of the four exposures, subjects will complete a brief questionnaire on their mood and the quality of their sleep the night before the test. After each test, the subjects will be asked about any discomfort and subjective symptoms experienced during the exposure. Before the test begins, Holter ECG monitoring equipment will be installed. HRV will be analyzed from the ECG recordings, including time- and frequency-domain parameters. The tests will always be performed in the morning (9:00-12:00) to avoid the influence of the diurnal rhythm on HRV results. Students will perform psychological tests (Vienna Test System) 15 minutes before the end of each session.
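Two of the standard time-domain HRV parameters that would be computed from such Holter recordings, SDNN and RMSSD, can be sketched as follows. The RR-interval series below is a toy example, not data from the study.

```python
import math

def sdnn(rr_ms):
    """Standard deviation of all RR intervals (overall variability)."""
    m = sum(rr_ms) / len(rr_ms)
    return math.sqrt(sum((x - m) ** 2 for x in rr_ms) / len(rr_ms))

def rmssd(rr_ms):
    """Root mean square of successive RR differences (short-term,
    vagally mediated variability)."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Toy RR-interval series in milliseconds (illustrative only).
rr = [812, 798, 820, 805, 790, 815, 808, 796]
print(round(sdnn(rr), 1), round(rmssd(rr), 1))  # 9.6 16.7
```

Frequency-domain parameters (LF, HF power and their ratio) require spectral estimation over longer, artifact-cleaned RR series and are omitted from this sketch.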

Keywords: neurovegetative control, heart rate variability (HRV), cognitive processes, low frequency noise

Procedia PDF Downloads 51
196 Comparative Evaluation of High Pure Mn3O4 Preparation Technique between the Conventional Process from Electrolytic Manganese and a Sustainable Approach Directly from Low-Grade Rhodochrosite

Authors: Fang Lian, Zefang Chenli, Laijun Ma, Lei Mao

Abstract:

To date, the electrolytic process has been a popular way to prepare high-purity Mn and MnO2 (EMD). However, the conventional process for preparing high-purity manganese oxides such as Mn3O4 from electrolytic manganese metal is characterized by long production cycles, high pollutant discharge and high energy consumption, especially when starting from low-grade rhodochrosite, the main resource for exploitation and application in China. Moreover, Mn3O4 prepared from electrolytic manganese consists of large particles with a single, poorly controlled morphology and weak chemical activity. On the other hand, the hydrometallurgical method combined with thermal decomposition, hydrothermal synthesis and sol-gel processes has been widely studied because of its high efficiency, low consumption and low cost. The key problem in the direct preparation of the manganese oxide series from low-grade rhodochrosite, however, is the complete removal of multiple impurities such as iron, silicon, calcium and magnesium. It is therefore urgent to develop a sustainable approach to the high-purity manganese oxide series characterized by a short process, high efficiency, environmental friendliness and economic benefit. In our work, the preparation technique for high-purity Mn3O4 directly from low-grade rhodochrosite ore (13.86% Mn) was studied and improved intensively, including an effective leaching process and a short purifying process. Based on the common-ion effect, repeated leaching of rhodochrosite with sulfuric acid is proposed to improve the solubility of Mn2+ and inhibit the dissolution of the impurities Ca2+ and Mg2+. Moreover, the repeated leaching process makes full use of the sulfuric acid and lowers the cost of the raw material. With the aid of theoretical calculation, Ba(OH)2 was chosen to adjust the pH value of the manganese sulfate solution and BaF2 to remove Ca2+ and Mg2+ completely in the purifying process. 
Herein, the recovery ratio of manganese and the removal ratio of the impurities were evaluated via chemical titration and ICP analysis, respectively. A comparison between the conventional preparation technique from electrolytic manganese and the sustainable approach directly from low-grade rhodochrosite is also presented. The results demonstrate that the extraction ratio and the recovery ratio of manganese reached 94.3% and 92.7%, respectively. The heavy metal impurities have been decreased to less than 1 ppm, and the content of calcium, magnesium and sodium has been decreased to less than 20 ppm, which meets the standards of high-purity reagents for energy and electronic materials. Compared with the conventional technique from electrolytic manganese, the power consumption has been reduced to ≤2000 kWh/t(product) in our short-process approach. Moreover, the comprehensive recovery rate of manganese increases significantly, and the wastewater generated from our short-process approach contains a low content of ammonia/nitrogen, about 500 mg/t(product), and no toxic emissions. Our study contributes to the sustainable application of low-grade manganese ore. Acknowledgements: The authors are grateful to the National Science and Technology Support Program of China (No. 2015BAB01B02) for financial support of this work.

Keywords: leaching, high purity, low-grade rhodochrosite, manganese oxide, purifying process, recovery ratio

Procedia PDF Downloads 214
195 Ruminal Fermentation of Biologically Active Nitrate- and Nitro-Containing Forages

Authors: Robin Anderson, David Nisbet

Abstract:

Nitrate, 3-nitro-1-propionic acid (NPA) and 3-nitro-1-propanol (NPOH) are biologically active chemicals that can accumulate naturally in rangeland grasses and forages consumed by grazing cattle, sheep and goats. While toxic to livestock if the accumulations and amounts consumed are high enough, particularly in animals having no recent exposure to the forages, these chemicals are known to be potent inhibitors of the methane-producing bacteria inhabiting the rumen. Consequently, there is interest in examining their potential use as anti-methanogenic compounds to decrease methane emissions by grazing ruminants. In the present study, rumen microbes, collected freshly from a cannulated Holstein cow maintained on a 50:50 corn-based concentrate:alfalfa diet, were mixed (10 mL fluid) in 18 x 150 mm crimp-top tubes with 0.5 g of high-nitrate barley (Hordeum vulgare; containing 272 µmol nitrate per g forage dry matter) or with NPA- or NPOH-containing milkvetch forages (Astragalus canadensis and Astragalus miser, containing 80 and 174 µmol soluble NPA or NPOH per g forage dry matter, respectively). Incubations containing 0.5 g alfalfa (Medicago sativa) were used as controls. Tubes (3 per forage) were capped and incubated anaerobically (under oxygen-free carbon dioxide) for 24 h at 39°C, after which time the amount of total gas produced was measured via volume displacement and headspace samples were analyzed by gas chromatography to determine concentrations of hydrogen and methane. Fluid samples were analyzed by gas chromatography to measure accumulations of fermentation acids. A completely randomized analysis of variance revealed that the nitrate-containing barley and both the NPA- and NPOH-containing milkvetches significantly decreased methane production, by > 50%, when compared to methane produced by populations incubated similarly with alfalfa (70.4 ± 3.6 µmol/mL incubation fluid). 
Accumulations of hydrogen, which typically increase when methane production is inhibited, did not differ between the incubations with the nitrate-containing barley or the NPA- and NPOH-containing milkvetches and the alfalfa controls (0.09 ± 0.04 µmol/mL incubation fluid). Accumulations of fermentation acids produced in the incubations containing the high-nitrate barley and the NPA- and NPOH-containing milkvetches likewise did not differ from those observed in incubations containing alfalfa (123.5 ± 10.8, 36.0 ± 3.0, 17.1 ± 1.5, 3.5 ± 0.3, 2.3 ± 0.2 and 2.2 ± 0.2 µmol/mL incubation fluid for acetate, propionate, butyrate, valerate, isobutyrate and isovalerate, respectively). This finding indicates that the microbial populations did not compensate for the decreased methane production via changes in the production of fermentative acids. Stoichiometric estimation of the fermentation balance revealed that > 77% of the reducing equivalents generated during fermentation of the forages were recovered in fermentation products, and the recoveries did not differ between the alfalfa incubations and those with the high-nitrate barley or the NPA- or NPOH-containing milkvetches. Stoichiometric estimates of the amounts of hexose fermented similarly did not differ between the nitrate-, NPA- and NPOH-containing incubations and those with alfalfa, averaging 99.6 ± 37.2 µmol hexose consumed/mL of incubation fluid. These results suggest that forages containing nitrate, NPA or NPOH may be useful for reducing methane emissions of grazing ruminants, provided the risks of toxicity can be effectively managed.
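The hexose estimate quoted above can be approximated with a commonly used simplified VFA stoichiometry (after Wolin). The function below is an illustrative sketch, not the exact balance the authors used: it assumes butyrate and valerate each derive from one hexose and acetate and propionate each from half a hexose, and it omits the branched-chain acids.

```python
def hexose_fermented(acetate, propionate, butyrate, valerate=0.0):
    """Simplified stoichiometric estimate of hexose consumed:
    acetate/2 + propionate/2 + butyrate + valerate (umol/mL).
    Counting valerate as a one-hexose product is an assumption."""
    return acetate / 2.0 + propionate / 2.0 + butyrate + valerate

# Mean VFA accumulations reported above for the alfalfa controls.
print(round(hexose_fermented(123.5, 36.0, 17.1, 3.5), 2))  # 100.35
```

The result lies close to the 99.6 µmol/mL average reported above; the residual difference reflects the branched-chain acids and whatever exact balance the authors applied, which this sketch omits.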

Keywords: nitrate, nitropropanol, nitropropionic acid, rumen methane emissions

Procedia PDF Downloads 100
194 Behavioral Analysis of Anomalies in Intertemporal Choices Through the Concept of Impatience and Customized Strategies for Four Behavioral Investor Profiles With an Application of the Analytic Hierarchy Process: A Case Study

Authors: Roberta Martino, Viviana Ventre

Abstract:

The Discounted Utility Model is the essential reference for calculating the utility of intertemporal prospects. According to this model, the value assigned to an outcome decreases as the distance grows between the moment the choice is made and the instant the outcome is experienced. This diminution determines the intertemporal preferences of the individual, whose psychological significance is encapsulated in the discount rate. The classic model provides a discount rate of linear or exponential nature, necessary for temporally consistent preferences. Empirical evidence, however, has proven that individuals apply discount rates of a hyperbolic nature, generating the phenomenon of intertemporal inconsistency. What this means is that individuals have difficulty managing their money and future. Behavioral finance, which analyzes the investor's attitude through cognitive psychology, has made it possible to understand that beyond individual financial competence, there are factors that condition choices because they alter the decision-making process: behavioral biases. Since such cognitive biases are inevitable, to improve the quality of choices, research has focused on a personalized approach to strategies that combines behavioral finance with personality theory. From these considerations emerges the need for a procedure to construct personalized strategies that consider the personal characteristics of the client, such as age or gender, and his personality. The work is developed in three parts. The first part discusses and investigates the weight of the degree of impatience and the decrease in impatience in the anomalies of the discounted utility model. Specifically, the degree of decrease in impatience quantifies the impact that emotional factors generated by haste and financial market agitation have on decision making. The second part considers the relationship between decision making and personality theory. 
Specifically, four behavioral categories associated with four categories of behavioral investors are considered. This association allows us to interpret intertemporal choice as a combination of bias and temperament. The third part of the paper presents a method for constructing personalized strategies using the Analytic Hierarchy Process. Briefly: the first level of the analytic hierarchy process considers the goal of the strategic plan; the second level considers the four temperaments; the third level compares the temperaments with the anomalies of the discounted utility model; and the fourth level contains the different possible alternatives to be selected. The weights of the hierarchy between level 2 and level 3 are constructed from the degrees of decrease in impatience derived for each temperament in an experimental phase. The results obtained confirm the relationship between temperaments and anomalies through the degree of decrease in impatience and highlight the actual impact of emotions on decision making. Moreover, the work proposes an original and useful way to improve financial advice. Inclusion of additional levels in the Analytic Hierarchy Process can further improve strategic personalization.
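One step of the Analytic Hierarchy Process described above can be sketched as follows. The pairwise comparison matrix below is an illustrative placeholder, not the experimental judgments collected in the study; the row-geometric-mean approximation to the principal eigenvector is the standard shortcut for deriving priority weights.

```python
import numpy as np

# Sketch of one AHP step: deriving priority weights for the four
# temperaments (level 2 of the hierarchy) from a pairwise comparison
# matrix. Entries are illustrative placeholders on Saaty's 1-9 scale.
A = np.array([
    [1.0,   3.0,   5.0,   7.0],
    [1 / 3, 1.0,   3.0,   5.0],
    [1 / 5, 1 / 3, 1.0,   3.0],
    [1 / 7, 1 / 5, 1 / 3, 1.0],
])

# Row geometric means normalised to sum to 1: the standard approximation
# to the principal eigenvector of a reciprocal comparison matrix.
gm = A.prod(axis=1) ** (1 / A.shape[1])
weights = gm / gm.sum()

# Consistency index from the estimated maximum eigenvalue; Saaty's rule
# of thumb accepts judgments when the consistency ratio CI/RI < 0.1.
lam_max = float(np.mean(A @ weights / weights))
ci = (lam_max - len(A)) / (len(A) - 1)

print(weights.round(3), round(ci, 3))
```

In the study's hierarchy, the same computation would be repeated for each level, with the level-2/level-3 weights informed by the experimentally derived degrees of decrease in impatience.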

Keywords: analytic hierarchy process, behavioral finance anomalies, intertemporal choice, personalized strategies

Procedia PDF Downloads 73
193 Deep Learning Framework for Predicting Bus Travel Times with Multiple Bus Routes: A Single-Step Multi-Station Forecasting Approach

Authors: Muhammad Ahnaf Zahin, Yaw Adu-Gyamfi

Abstract:

Bus transit is a crucial component of transportation networks, especially in urban areas. Any intelligent transportation system must have accurate real-time information on bus travel times, since it minimizes waiting times for passengers at different stations along a route, improves service reliability, and significantly optimizes travel patterns. Bus agencies must enhance the quality of their information service to serve their passengers better and draw in more travelers, since people waiting at bus stops are frequently anxious about when the bus will arrive at their starting point and when it will reach their destination. To address this issue, different models have been developed recently for predicting bus travel times, but most of them are focused on smaller road networks due to their relatively subpar performance on vast, high-density urban networks. This paper develops a deep learning-based architecture using a single-step multi-station forecasting approach to predict average bus travel times for numerous routes, stops, and trips on a large-scale network using heterogeneous bus transit data collected from the GTFS database. Over one week, data was gathered from multiple bus routes in Saint Louis, Missouri. In this study, a Gated Recurrent Unit (GRU) neural network was used to predict the mean vehicle travel times for different hours of the day for multiple stations along multiple routes. The number of historical time steps and the prediction horizon were set to 5 and 1, respectively, which means that five hours of historical average travel time data were used to predict the average travel time for the following hour. The spatial and temporal information and the historical average travel times were captured from the dataset for model input parameters. As adjacency matrices for the spatial input parameters, the station distances and sequence numbers were used, and the time of day (hour) was considered for the temporal inputs. 
Other inputs, including volatility information such as the standard deviation and variance of journey durations, were also included in the model to make it more robust. The model's performance was evaluated using the mean absolute percentage error (MAPE). The observed prediction errors for various routes, trips, and stations remained consistent throughout the day. The results showed that the developed model could predict travel times more accurately during peak traffic hours, with a MAPE of around 14%, and performed less accurately during the latter part of the day. In the context of a complicated transportation network in high-density urban areas, the model showed its applicability for real-time travel time prediction of public transportation and ensured the high quality of its predictions.
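The evaluation metric named above can be sketched in a few lines. The travel-time values below are illustrative and are not drawn from the Saint Louis GTFS dataset.

```python
import numpy as np

# Sketch of the evaluation metric used in the study: mean absolute
# percentage error (MAPE) between observed and predicted travel times.

def mape(actual, predicted):
    """MAPE in percent; assumes no zero values in `actual`."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return float(np.mean(np.abs((actual - predicted) / actual))) * 100

observed = [10.0, 12.0, 8.0, 15.0]   # hourly mean travel times (minutes)
forecast = [11.0, 11.0, 9.0, 14.0]   # one-step-ahead predictions
print(round(mape(observed, forecast), 2))  # a little under 10%
```

A reported MAPE of around 14% during peak hours means predictions deviated from observed travel times by roughly one seventh on average.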

Keywords: gated recurrent unit, mean absolute percentage error, single-step forecasting, travel time prediction

Procedia PDF Downloads 49
192 Assessing the Environmental Efficiency of China’s Power System: A Spatial Network Data Envelopment Analysis Approach

Authors: Jianli Jiang, Bai-Chen Xie

Abstract:

The climate issue has aroused global concern. Achieving sustainable development is a good path for countries to mitigate environmental and climatic pressures, although there are many difficulties. The first step towards sustainable development is to evaluate the environmental efficiency of the energy industry with proper methods. The power sector is a major source of CO2, SO2, and NOx emissions, and evaluating the environmental efficiency (EE) of power systems is a prerequisite for alleviating pressures on energy and the environment. Data Envelopment Analysis (DEA) has been widely used in efficiency studies. However, measuring the efficiency of a system (be it a nation, region, sector, or business) is a challenging task. Classic DEA treats the decision-making units (DMUs) as independent, which neglects the interaction between DMUs. Ignoring these inter-regional links may result in systematic bias in the efficiency analysis; for instance, the renewable power generated in a certain region may benefit adjacent regions, while SO2 and CO2 emissions act oppositely. This study proposes a spatial network DEA (SNDEA) with a slack measure that can capture the spatial spillover effects of inputs/outputs among DMUs when measuring efficiency. This approach is used to study the EE of China's power system, which consists of generation, transmission, and distribution departments, using a panel dataset from 2014 to 2020. In the empirical example, the energy and patent inputs, the undesirable CO2 output, and the renewable energy (RE) power variables are tested for a significant spatial spillover effect. Compared with the classic network DEA, the SNDEA result shows an obvious difference, tested by the global Moran's I index. From a dynamic perspective, the EE of the power system experienced a visible surge from 2015 and then a sharp downtrend from 2019, mirroring the trend of the power transmission department. 
This phenomenon benefits from the market-oriented reform of the Chinese power grid enacted in 2015. The rapid decline in the environmental efficiency of the transmission department in 2020 was mainly due to the COVID-19 epidemic, which seriously hindered economic development. While the EE of the power generation department shows a declining trend overall, this is reasonable when RE power is taken into consideration. The installed capacity of RE power in 2020 was 4.40 times that in 2014, while power generation was 3.97 times; in other words, power generation per unit of installed capacity shrank. In addition, the consumption cost of renewable power increases rapidly with the increase in RE power generation. These two aspects make the EE of the power generation department show a declining trend. By incorporating the interactions among inputs/outputs into the DEA model, this paper proposes an efficiency evaluation method within the DEA framework, which sheds some light on efficiency evaluation in regional studies. Furthermore, the SNDEA model and the spatial DEA concept can be extended to other fields, such as industry- and country-level studies.
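The classic DEA baseline that SNDEA extends can be sketched for the simplest case. With a single input and a single output, the input-oriented CCR efficiency of each decision-making unit reduces to its output/input ratio divided by the best ratio on the frontier; the data below are toy values, not the Chinese provincial panel, and the general multi-input/multi-output case requires a linear program instead.

```python
# Sketch of the classic (non-spatial) DEA efficiency score the paper
# builds on, for the special case of one input and one output, where
# CCR efficiency reduces to each DMU's output/input ratio scaled by
# the best ratio observed across all DMUs.

def dea_efficiency(inputs, outputs):
    """Efficiency scores in (0, 1] for each DMU; 1 marks the frontier."""
    ratios = [y / x for x, y in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]

# Three hypothetical regional power systems: input = energy consumed,
# output = electricity delivered (arbitrary units).
print(dea_efficiency([2.0, 4.0, 8.0], [2.0, 2.0, 4.0]))  # [1.0, 0.5, 0.5]
```

The spatial network extension replaces this independent per-DMU scoring with constraints that let one region's inputs and outputs spill over to its neighbours.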

Keywords: spatial network DEA, environmental efficiency, sustainable development, power system

Procedia PDF Downloads 76
191 Fake News Domination and Threats on Democratic Systems

Authors: Laura Irimies, Cosmin Irimies

Abstract:

The public space all over the world is currently confronted with an aggressive assault of fake news that has lately impacted public agenda setting, collective decisions, and social attitudes. Top leaders constantly call out mainstream news as "fake news", and public opinion grows more confused. Fake news is generally defined as false, often sensational, information disseminated under the guise of news reporting; "fake news" was declared word of the year 2017 by Collins Dictionary and has been one of the most debated socio-political topics of recent years. Websites which, deliberately or not, publish misleading information are often shared on social media, where they essentially increase their reach and influence. According to international reports, exposure to fake news is an undeniable reality all over the world: exposure to completely invented information reaches 31 percent in the US, and it is even higher in Eastern European countries such as Hungary (42%) and Romania (38%), or in Mediterranean countries such as Greece (44%) and Turkey (49%), and lower in Northern and Western European countries: Germany (9%), Denmark (9%), or Holland (10%). While the study of fake news (its mechanisms and effects) is still in its infancy, it has become truly relevant as the phenomenon seems to have a growing impact on democratic systems. Studies conducted by the European Commission show that 83% of respondents out of a total of 26,576 interviewees consider the existence of news that misrepresents reality a threat to democracy. Studies recently conducted at Arizona State University show that people with higher education can more easily spot fake headlines, but over 30 percent of them can still be trapped by fake information. 
If we were to refer only to some of the most recent situations in Romania, fake news issues and hidden-agenda suspicions related to the massive and extremely violent public demonstrations held on August 10th, 2018, with strong participation of the Romanian diaspora, were extensively reflected by the international media and generated serious debates within the European Commission. Considering the above framework, the study raises four main research questions: 1. Is fake news a problem or just a natural consequence of mainstream media decline and the abundance of sources of information? 2. What are the implications for democracy? 3. Can fake news be controlled without restricting fundamental human rights? 4. How could the public be properly educated to detect fake news? The research uses mostly qualitative but also quantitative methods: content analysis of studies, websites and media content, official reports, and interviews. The study will demonstrate the real threat fake news represents, as well as the need for proper media literacy education, and will draw basic guidelines for developing a new and essential skill: that of detecting fake news in a society overwhelmed by sources that constantly churn out massive amounts of information, increasing the risk of misinformation and leading to inadequate public decisions that could affect democratic stability.

Keywords: agenda setting, democracy, fake news, journalism, media literacy

Procedia PDF Downloads 99
190 Clinical Validation of an Automated Natural Language Processing Algorithm for Finding COVID-19 Symptoms and Complications in Patient Notes

Authors: Karolina Wieczorek, Sophie Wiliams

Abstract:

Introduction: Patient data is often collected in Electronic Health Record (EHR) systems for purposes such as providing care as well as reporting data. This information can be re-used to validate data models in clinical trials or in epidemiological studies. Manual validation of automated tools is vital to pick up errors in processing and to provide confidence in the output. Mentioning a disease in a discharge letter does not necessarily mean that a patient suffers from this disease; many letters discuss a diagnostic process, different tests, or whether a patient has a certain disease. The COVID-19 dataset in this study used natural language processing (NLP), an automated algorithm which extracts information related to COVID-19 symptoms, complications, and medications prescribed within the hospital. Free-text clinical patient notes are rich sources of information which contain patient data not captured in a structured form, hence the use of named entity recognition (NER) to capture additional information. Methods: Patient data (discharge summary letters) were exported and screened by an algorithm to pick up relevant terms related to COVID-19. A list of 124 Systematized Nomenclature of Medicine (SNOMED) Clinical Terms was provided in Excel with corresponding IDs. Two independent medical student researchers were provided with this dictionary of SNOMED terms to refer to when screening the notes. They worked on two separate datasets, called "A" and "B", respectively. Notes were screened to check that the correct terms had been picked up by the algorithm and that negated terms were not picked up. Results: Implementation of the algorithm in the hospital began on March 31, 2020, and the first EHR-derived extract was generated for use in an audit study on June 04, 2020. 
The dataset has contributed to large, priority clinical trials (including the International Severe Acute Respiratory and Emerging Infection Consortium (ISARIC), by bulk upload to REDCap research databases) and to local research and audit studies. Successful sharing of EHR-extracted datasets requires communicating the provenance and quality of this data, including its completeness and accuracy. The results of the validation of the algorithm were the following: precision (0.907), recall (0.416), and F-score (0.570). Percentage enhancement with NLP-extracted terms compared to regular data extraction alone was low (0.3%) for relatively well-documented data such as previous medical history, but higher (16.6%, 29.53%, 30.3%, and 45.1%) for complications, presenting illness, chronic procedures, and acute procedures, respectively. Conclusions: This automated NLP algorithm is shown to be useful in facilitating patient data analysis and has the potential to be used in larger-scale clinical trials, for example to assess study exclusion criteria for participants in the development of vaccines.
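The reported validation figures are internally consistent, which can be checked with a minimal sketch. The TP/FP/FN counts below are illustrative values chosen to be consistent with the reported rates, not the study's actual counts.

```python
# Sketch of the validation metrics reported above. Precision, recall and
# the F-score follow directly from counts of true positives (TP), false
# positives (FP) and false negatives (FN); the last line checks that the
# reported precision and recall imply the reported F-score via the
# harmonic-mean relation.

def precision_recall(tp, fp, fn):
    return tp / (tp + fp), tp / (tp + fn)

def f_score(precision, recall):
    """F1 score: harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

p, r = precision_recall(tp=39, fp=4, fn=55)   # illustrative counts
print(round(f_score(0.907, 0.416), 3))        # ≈0.57, matching the reported 0.570
```

The high precision with low recall pattern means the algorithm rarely flags terms incorrectly but misses many mentions, which fits its role as an enhancement over structured extraction rather than a replacement.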

Keywords: automated, algorithm, NLP, COVID-19

Procedia PDF Downloads 71
189 The Benefits of Using Transformative Inclusion Practices and Action Research in Teaching Development and Active Participation of Roma Students in the Kindergarten

Authors: Beazidou Eleftheria

Abstract:

Roma children face discrimination in schools where they are the minority. On the other hand, teachers do not identify the specific needs of Roma students for educational and social inclusion and generally use a very restricted repertoire of insufficient strategies for helping them. Modern classrooms can and should look different. Therefore, engaging in transformational learning with young children is a deliberate choice. Transformation implies a different way of thinking and acting. This requires new knowledge that incorporates multiple perspectives and actions in order to generate experiences for further learning. In this way, we build knowledge based on empirical examples, and we share what works efficiently. The present research aims at assisting the participating teachers to improve their inclusive teaching practice, thus ultimately benefiting their students. To increase the impact of transformative efforts with a 'new' teaching approach, we implemented a classroom-based action research program for over six months in five kindergarten classrooms with Roma and non-Roma students. More specifically, we explore a) participants' experience of the program and b) whether the program is successful in helping participants change their teaching practice. Action research is, by definition, a form of inquiry that is intended to have both action and research outcomes. The action research process that we followed included five phases: 1. Defining the problem: As teachers reported, Roma students are often the most excluded group in schools (low social interaction and participation in classroom activities). 2. Developing a plan to address the problem: We decided to address the problem by improving/transforming the inclusive practices that teachers implemented in their classrooms. 3. 
Acting (implementing the plan): We incorporated new activities for all students with the following goals: a) all students being passionate about their learning; b) teachers investigating issues in the educational context that are personal and meaningful to children's growth; c) establishment of a new module on values and skills for all students; d) raising awareness of Roma culture; e) teaching students to reflect. 4. Observing: We explored the potential for transformation through observations of students' participation in classroom activities and peer interaction, thus generating evidence from the data. 5. Reflecting and acting: After analyzing and evaluating the outcomes and considering the obstacles encountered during the program's implementation, we established new goals for the next steps of the program. These are centered on a) the literacy skills of Roma students and b) the transformation of teachers' perceptions and beliefs, which have a powerful impact on their willingness to adopt new teaching strategies. The final evaluation of the program showed significant achievement of the transformative goals related to the active participation of Roma students in classroom activities and peer interaction, while the activities related to literacy skills did not have the expected results. In conclusion, children were equipped with relevant knowledge and skills to raise their potential and contribute to wider societal development, and teachers improved their inclusive teaching practice.

Keywords: action research, inclusive practices, kindergarten, transformation

Procedia PDF Downloads 64
188 Diversification of Productivity of the Oxfordian Subtidal Carbonate Factory in the Holy Cross Mountains

Authors: Radoslaw Lukasz Staniszewski

Abstract:

The aim of the research was to verify the lateral extent and thickness variability of individual limestone layers within Oxfordian medium- and thick-bedded limestones interbedded with marlstones. Location: The main research area is located in the south-central part of Poland, in the south-western part of the Permo-Mesozoic margin of the Holy Cross Mountains. It includes outcrops located on the line between Mieczyn and Wola Morawicka. The analyses were carried out on six profiles (Mieczyn, Gniezdziska, Tokarnia, Wola Morawicka, Morawica and Wolica) representing three Oxfordian units: the Jasna Gora layers, grey limestones, and Morawica limestones. Additionally, an attempt was made to correlate the thickness sequence from the Holy Cross Mountains with the profile from the quarry in Zawodzie, located 3 km east of Czestochowa. The distance between the outermost profiles is 122 km in a straight line. Methodology of research: The Callovian-Oxfordian boundary was taken as the reference point for the correlation. At the same time, ammonite-based stratigraphic studies were carried out, which allowed individual packages to be identified in the remote outcrops. The analysis of data collected during fieldwork was mainly devoted to the correlation of thickness sequences of limestone layers in subsequent profiles. In order to check the objectivity of the comparison between outcrops, the profiles were presented as thickness functions of the successive layers. The generated functions were correlated against each other, and the Pearson correlation coefficient was calculated. The next step in the research was to statistically determine the percentage change in the thickness of individual layers across the subsequent profiles, and on this basis to plot a function of relative carbonate productivity. 
Results: The result of the above-mentioned procedures is an illustration of the extent of 34 rock layers across the examined area, demonstrating the repeatability of their succession in subsequent outcrops. It can also be observed that the thickness of individual layers in the Holy Cross Mountains increases from north-west towards south-east. Despite changes in the thickness of the layers between profiles, their relations within the sequence remain constant. The lowest matching ratio of thickness sequences, calculated using the Pearson correlation coefficient formula, is 0.67, while the highest is 0.84. The thickness of individual layers changes by between 4% and 230% over the examined area. Interpretation: The layers in the outcrops covered by the research show continuity throughout the examined area, and it is possible to correlate them precisely, which means that the process determining the formation of the layers was regional and probably included both the margin of the Holy Cross Mountains and the north-eastern part of the Krakow-Czestochowa Jura Upland. Local changes in the sedimentation environment affecting the productivity of the subtidal carbonate factory only cause the thickness of the layers to change without altering the thickness proportions between the profiles. Based on the percentage changes in the thickness of individual layers in the subsequent profiles, it can be concluded that the local productivity of the subtidal carbonate factory increases logarithmically.
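The thickness-sequence comparison underlying the matching ratios can be sketched as a Pearson correlation of bed-by-bed thicknesses between two profiles. The sequences below are illustrative, not the measured sections.

```python
import math

# Sketch of the profile comparison: Pearson correlation of the thickness
# sequences of the same beds measured in two different outcrops.

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Two hypothetical profiles whose beds thicken proportionally towards the
# south-east: they correlate strongly despite different absolute values.
profile_a = [0.4, 0.9, 0.3, 1.2, 0.6]   # bed thicknesses, m
profile_b = [0.9, 1.8, 0.7, 2.5, 1.3]   # same beds in a thicker section
print(round(pearson(profile_a, profile_b), 2))
```

Because the coefficient is insensitive to a uniform scaling of thickness, values of 0.67-0.84, as reported, indicate that the bed-to-bed proportions, rather than absolute thicknesses, persist across the area.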

Keywords: Oxfordian, Holy Cross Mountains, carbonate factory, limestone

Procedia PDF Downloads 94
187 The Establishment of Primary Care Networks (England, UK) Throughout the COVID-19 Pandemic: A Qualitative Exploration of Workforce Perceptions

Authors: Jessica Raven Gates, Gemma Wilson-Menzfeld, Professor Alison Steven

Abstract:

In 2019, the Primary Care system in the UK National Health Service (NHS) was subject to reform and restructuring. Primary Care Networks (PCNs) were established, which aligned with a trend towards integrated care both within the NHS and internationally. The introduction of PCNs brought groups of GP practices in a locality together, to operate as a network, build on existing services and collaborate at a larger scale. PCNs were expected to bring a range of benefits to patients and address some of the workforce pressures in the NHS, through an expanded and collaborative workforce. The early establishment of PCNs was disrupted by the emerging COVID-19 pandemic. This study, set in the context of the pandemic, aimed to explore experiences of the PCN workforce, and their perceptions of the establishment of PCNs. Specific objectives focussed on examining factors perceived as enabling or hindering the success of a PCN, the impact on day-to-day work, the approach to implementing change, and the influence of the COVID-19 pandemic upon PCN development. This study is part of a three-phase PhD project that utilized qualitative approaches and was underpinned by social constructionist philosophy. Phase 1: a systematic narrative review explored the provision of preventative healthcare services in UK primary settings and examined facilitators and barriers to delivery as experienced by the workforce. Phase 2: informed by the findings of phase 1, semi-structured interviews were conducted with fifteen participants (PCN workforce). Phase 3: follow-up interviews were conducted with original participants to examine any changes to their experiences and perceptions of PCNs. Three main themes span across phases 2 and 3 and were generated through a Framework Analysis approach: 1) working together at scale, 2) network infrastructure, and 3) PCN leadership. 
Findings suggest that through efforts to work together at scale and collaborate as a network, participants have broadly accepted the concept of PCNs. However, the workforce has been hampered by system design and system complexity. Operating against such barriers has led to a negative psychological impact on some PCN leaders and others in the PCN workforce. While the pandemic undeniably increased pressure on healthcare systems around the world, it also acted as a disruptor, offering a glimpse into how collaboration in primary care can work well. Through the integration of findings from all phases, a new theoretical model has been developed, which conceptualises the findings from this Ph.D. study and demonstrates how the workforce has experienced change associated with the establishment of PCNs. The model includes a contextual component of the COVID-19 pandemic and has been informed by concepts from Complex Adaptive Systems theory. This model is the original contribution to knowledge of the PhD project, alongside recommendations for practice, policy and future research. This study is significant in the realm of health services research, and while the setting for this study is the UK NHS, the findings will be of interest to an international audience as the research provides insight into how the healthcare workforce may experience imposed policy and service changes.

Keywords: health services research, qualitative research, NHS workforce, primary care

Procedia PDF Downloads 35
186 Developing Thai-UK Double Degree Programmes: An Exploratory Study Identifying Challenges, Competing Interests and Risks

Authors: Joy Tweed, Jon Pike

Abstract:

In Thailand, a 4.0 policy has been initiated that is designed to prepare and train an appropriate workforce to support the move to a value-based economy. One aspect of support for this policy is a project to encourage the creation of double degree programmes, specifically between Thai and UK universities. This research into the project, conducted with its key players, explores the factors that can either enable or hinder the development of such programmes. It is an area that has received little research attention to date. Key findings focus on differences in quality assurance requirements, attitudes to benefits, risks, and committed levels of institutional support, thus providing valuable input into future policy making. The Transnational Education (TNE) Development Project was initiated in 2015 by the British Council, in conjunction with the Office for Higher Education Commission (OHEC), Thailand. The purpose of the project was to facilitate opportunities for Thai Universities to partner with UK Universities so as to develop double degree programme models. In this arrangement, the student gains both a UK and a Thai qualification, spending time studying in both countries. Twenty-two partnerships were initiated via the project. Utilizing a qualitative approach, data sources included participation in TNE project workshops, peer reviews, and over 20 semi-structured interviews conducted with key informants within the participating UK and Thai universities. Interviews were recorded, transcribed, and analysed for key themes. The research has revealed that the strength of the relationship between the two partner institutions is critical. Successful partnerships are often built on previous personal contact, have senior-level involvement and are strengthened by partnership on different levels, such as research, student exchange, and other forms of mobility. 
The support of the British Council was regarded as a key enabler in developing these types of projects for those universities that had not previously been involved in TNE. The involvement of industry is apparent in programmes with high scientific content but is not well developed in other subject areas. Factors that hinder the development of partnership programmes include the approval processes and quality requirements of each institution. Significant differences in fee levels between Thai and UK universities present a challenge, and attempts to bridge them require goodwill on the part of the UK institutions that may be difficult to realise. This research indicates the key factors to which attention needs to be given when developing a TNE programme. Early attention to these factors can reduce the likelihood that the partnership will fail to develop. Representatives in both partner universities need to understand their respective processes of development and approval. The research has important practical implications for policy-makers and planners involved with TNE, not only in relation to the specific TNE project but also more widely in relation to the development of TNE programmes in other countries and other subject areas. Future research will focus on assessing the success of the double degree programmes generated by the TNE Development Project from the perspective of universities, policy-makers, and industry partners.

Keywords: double-degree, internationalization, partnerships, Thai-UK

Procedia PDF Downloads 84
185 Surface Acoustic Waves Nebulisation of Liposomes Manufactured in situ for Pulmonary Drug Delivery

Authors: X. King, E. Nazarzadeh, J. Reboud, J. Cooper

Abstract:

Pulmonary diseases, such as asthma, are generally treated by the inhalation of aerosols, which has the advantage of reducing the off-target effects (e.g., toxicity) associated with systemic delivery in blood. Effective respiratory drug delivery requires a droplet size distribution between 1 and 5 µm. Inhalation of aerosols with a wide droplet size distribution outside this range results in deposition of the drug in non-targeted areas of the respiratory tract, introducing undesired side effects for the patient. In order to deliver the drug solely to the lower branches of the lungs and release it in a targeted manner, a mechanism to control the size of the aerosolized droplets is required. To regulate drug release and to facilitate uptake by cells, drugs are often encapsulated into protective liposomes. However, a multistep process is required for their formation, often performed at the formulation step, therefore limiting the range of available drugs or their shelf life. Using surface acoustic waves (SAWs), a pulmonary drug delivery platform was produced, which enabled the formation of aerosols of defined size and the formation of liposomes in situ. SAWs are mechanical waves propagating along the surface of a piezoelectric substrate. They were generated using an interdigital transducer on lithium niobate with an excitation frequency of 9.6 MHz at a power of 1 W. Disposable silicon superstrates were etched using photolithography and dry etch processes to create an array of cylindrical through-holes with different diameters and pitches. The superstrates were coupled to the SAW substrate through a water-based gel. As the SAW propagates on the superstrate, it enables nebulisation of a lipid solution deposited onto it. The cylindrical cavities restricted the formation of large drops in the aerosol, while at the same time unilamellar liposomes were created. 
SAW-formed liposomes showed higher monodispersity than the control sample, as well as a faster production rate. To characterise the aerosol, dynamic light scattering and laser diffraction methods were used, both confirming the size control of the aerosolised droplets. The use of silicon superstrates with cavity sizes of 100-200 µm produced an aerosol with a mean droplet size within the optimum range for pulmonary drug delivery, containing the liposomes into which the medicine could be loaded. Additionally, Cryo-TEM analysis of the liposomes showed the formation of vesicles with a narrow size distribution of 80-100 nm and a morphology suitable for drug delivery. Encapsulation of nucleic acids in liposomes through the developed SAW platform was also investigated: in vitro delivery of siRNA and luciferase DNA was achieved using the A549 cell line, a human lung carcinoma. In conclusion, a SAW pulmonary drug delivery platform was engineered that combines multiple time-consuming steps (liposome formation, drug loading, nebulisation) into a single platform, with the aim of delivering the medicament specifically to a targeted area and reducing the drug's side effects.
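As a rough order-of-magnitude check on the droplet sizes involved, ultrasonic atomisation is classically described by Lang's correlation, which ties the median droplet diameter to the capillary wavelength at the driving frequency. The sketch below is a minimal estimate, not the authors' model: water-like liquid properties and the Lang constant k ≈ 0.34 are assumed, and SAW nebulisation is known to deviate from this simple picture.

```python
import math

def lang_droplet_diameter(freq_hz, surface_tension=0.072, density=1000.0, k=0.34):
    """Lang's correlation for ultrasonic atomisation:
    d = k * (8*pi*sigma / (rho * f**2)) ** (1/3)
    with sigma in N/m, rho in kg/m^3, f in Hz; returns metres.
    Defaults are water-like properties (assumed, illustrative only)."""
    return k * (8 * math.pi * surface_tension / (density * freq_hz ** 2)) ** (1.0 / 3.0)

# At the 9.6 MHz excitation frequency used in the abstract:
d = lang_droplet_diameter(9.6e6)
print(f"Lang estimate of droplet diameter: {d * 1e6:.2f} um")
```

At 9.6 MHz this predicts droplets of roughly a micron, compatible with the 1-5 µm target range; in the abstract's platform, however, it is the superstrate cavity geometry that is used to suppress the large-droplet tail.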

Keywords: acoustics, drug delivery, liposomes, surface acoustic waves

Procedia PDF Downloads 91
184 Characterization of Potato Starch/Guar Gum Composite Film Modified by Ecofriendly Cross-Linkers

Authors: Sujosh Nandi, Proshanta Guha

Abstract:

Synthetic plastics are preferred for food packaging due to their high strength, stretchability, good water vapor and gas barrier properties, transparency, and low cost. However, the environmental pollution generated by these synthetic plastics is a major concern of modern human civilization. Therefore, biodegradable polymers are encouraged as substitutes for synthetic non-biodegradable polymers, even considering the drawbacks in the mechanical and barrier properties of the resulting films. Starch, one of the potential raw materials for biodegradable polymers, suffers from poor water barrier and mechanical properties due to its hydrophilic nature. Moreover, recrystallization of starch molecules during aging decreases the flexibility and increases the elastic modulus of the film. The recrystallization process can be minimized by blending structurally compatible hydrocolloids into the starch matrix. Therefore, incorporating guar gum, which has a similar structural backbone, into the starch matrix can yield a promising biodegradable film. However, owing to the hydrophilic nature of both starch and guar gum, the water barrier property of the film is low. One prospective solution is to modify the potato starch/guar gum (PSGG) composite film using a cross-linker. Over the years, several cross-linking agents, such as phosphorus oxychloride and sodium trimetaphosphate, have been used to improve the water vapor permeability (WVP) of such films. However, these chemical cross-linking agents are toxic, expensive, and slow to degrade. Naturally available carboxylic acids (tartaric acid, malonic acid, succinic acid, etc.) have therefore been used as cross-linkers and found to enhance the water barrier property substantially. To our knowledge, no work has been reported using tartaric acid or succinic acid as cross-linking agents for PSGG films.
Therefore, the objective of the present study was to examine the changes in the water vapor barrier and mechanical properties of PSGG films after cross-linking with tartaric acid (TA) and succinic acid (SA). The cross-linkers were blended with the PSGG film-forming solution at four concentrations (4, 8, 12, and 16%) and cast on a Teflon plate at 37°C for 20 h. Fourier-transform infrared spectroscopy (FTIR) of the developed films showed a band at 1720 cm⁻¹, attributed to the formation of ester groups. On the other hand, the tensile strength (TS) of the cross-linked films decreased compared to the non-cross-linked films, whereas the strain at break increased severalfold. The results showed that tensile strength diminished with increasing concentration of TA or SA, the lowest TS (1.62 MPa) being observed for 16% SA. The maximum strain at break was also observed for 16% TA, possibly because TA cross-linked films have a lower degree of crystallinity than SA cross-linked films. Finally, the water vapor permeability of the succinic acid cross-linked films was reduced significantly, whereas it increased significantly with the addition of tartaric acid.
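Water vapor permeability of packaging films is typically measured gravimetrically (e.g., by the ASTM E96 cup method) and computed from the steady-state mass-gain rate, the film thickness, the exposed area, and the vapor-pressure difference across the film. The sketch below illustrates that standard calculation only; all numerical values are hypothetical, not data from this study:

```python
def water_vapor_permeability(mass_rate_g_per_s, thickness_m, area_m2, dp_pa):
    """Gravimetric WVP: (dm/dt) * L / (A * dP), in g*m/(m^2*s*Pa)."""
    return mass_rate_g_per_s * thickness_m / (area_m2 * dp_pa)

# Hypothetical cup test: 0.5 g of water gained per hour through a
# 0.1 mm film over a 0.005 m^2 cup mouth, with a 50% RH gradient
# at 25 degC (saturation vapor pressure ~3169 Pa, so dP ~ 1584.5 Pa).
wvp = water_vapor_permeability(0.5 / 3600, 1e-4, 0.005, 1584.5)
print(f"WVP = {wvp:.2e} g*m/(m^2*s*Pa)")  # ~1.75e-09
```

A lower WVP means a better water barrier, which is the direction the succinic acid cross-linking moved the PSGG films in this study.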

Keywords: cross linking agent, guar gum, organic acids, potato starch

Procedia PDF Downloads 90