Search results for: performance comparison
164 Improved Morphology in Sequential Deposition of the Inverted Type Planar Heterojunction Solar Cells Using Cheap Additive (DI-H₂O)
Authors: Asmat Nawaz, Ceylan Zafer, Ali K. Erdinc, Kaiying Wang, M. Nadeem Akram
Abstract:
Hybrid halide perovskites with the general formula ABX₃, where X = Cl, Br or I, are considered ideal candidates for the preparation of photovoltaic devices. The most commonly and successfully used hybrid halide perovskite for photovoltaic applications is CH₃NH₃PbI₃ and its analogue prepared from lead chloride, commonly symbolized as CH₃NH₃PbI₃₋ₓClₓ. Some research groups use lead-free (Sn replaces Pb) and mixed halide perovskites for device fabrication. Both mesoporous and planar structures have been developed. Compared with the mesoporous structure, in which the perovskite material infiltrates a mesoporous metal oxide scaffold, the planar architecture is much simpler and easier to fabricate. In a typical perovskite solar cell, a perovskite absorber layer is sandwiched between the hole and electron transport layers. Upon irradiation, carriers are created in the absorber layer and travel through the hole and electron transport layers and the interfaces between them. We fabricated an inverted planar heterojunction solar cell with the structure ITO/PEDOT/perovskite/PCBM/Al via a two-step spin-coating method, also called the sequential deposition method. A small amount of the cheap additive H₂O was added to the PbI₂/DMF precursor to make a homogeneous solution. We prepared four different solutions (without H₂O, 1% H₂O, 2% H₂O, 3% H₂O). After preparation, overnight stirring at 60 °C is essential for homogeneous precursor solutions. We observed that the solution with 1% H₂O was much more homogeneous at room temperature than the others, while the solution with 3% H₂O precipitated at once at room temperature. The four different PbI₂ films were formed on PEDOT substrates by spin coating, and immediately afterwards (before the PbI₂ dried) the substrates were immersed in methylammonium iodide solution (prepared in isopropanol) to complete the desired perovskite film. The substrates were then rinsed with isopropanol to remove excess methylammonium iodide and finally dried on a hot plate for only 1-2 minutes. In this study, we added H₂O to the PbI₂/DMF precursor solution. The concept of additives is widely used in bulk-heterojunction solar cells to manipulate the surface morphology, leading to enhanced photovoltaic performance. There are two important parameters for the selection of additives: (a) a higher boiling point than the host material and (b) good interaction with the precursor materials. We observed that the morphology of the films improved, achieving denser, more uniform films with fewer cavities and almost full surface coverage, but only with the precursor solution containing 1% H₂O. Therefore, we fabricated the complete perovskite solar cell by the sequential deposition technique with the 1% H₂O precursor solution. We concluded that, with the addition of additives to the precursor solutions, the morphology of the perovskite film can easily be manipulated. In the sequential deposition method, the thickness of the perovskite film is on the µm scale, whereas the charge diffusion length in PbI₂ is on the nm scale; therefore, by controlling the thickness using other deposition methods for the fabrication of solar cells, better efficiency can be achieved.
Keywords: methylammonium lead iodide, perovskite solar cell, precursor composition, sequential deposition
Procedia PDF Downloads 246
163 A Hardware-in-the-loop Simulation for the Development of Advanced Control System Design for a Spinal Joint Wear Simulator
Authors: Kaushikk Iyer, Richard M Hall, David Keeling
Abstract:
Hardware-in-the-loop (HIL) simulation is an advanced technique for developing and testing complex real-time control systems. This paper presents the benefits of HIL simulation and how it can be implemented and used effectively to develop, test, and validate advanced control algorithms used in a spinal joint wear simulator for the tribological testing of spinal disc prostheses. The spinal wear simulator is technologically the most advanced machine currently employed for the in-vitro testing of newly developed spinal disc implants. However, the existing control techniques, such as simple position control, do not allow the simulator to test non-sinusoidal waveforms. Thus, there is a need for better, advanced control methods that can be developed and tested rigorously but safely before deployment in the real simulator. A benchtop HIL setup was created for experimentation, controller verification, and validation purposes, allowing different control strategies to be tested rapidly in a safe environment. The HIL simulation aspect of this setup attempts to replicate similar spinal motion and loading conditions. The spinal joint wear simulator contains a four-bar linkage powered by electromechanical actuators. LabVIEW software is used to design a kinematic model of the spinal wear simulator to validate how each link contributes towards the final motion of the implant under test. As a result, the implant articulates with an angular motion specified in the international standard ISO 18192-1, which defines fixed, simplified, sinusoidal motion and load profiles for wear testing of cervical disc implants. Using a PID controller, a velocity-based position control algorithm was developed to interface with the benchtop setup that performs the HIL simulation. In addition to PID, a fuzzy logic controller (FLC) was also developed that acts as a supervisory controller. The FLC provides intelligence to the PID controller by automatically tuning it for profiles that vary in amplitude, shape, and frequency. This fuzzy-PID combination is novel to the wear-testing application for spinal simulators and demonstrated superior performance against PID when tested across a spectrum of frequencies. The results obtained are successfully validated against the load and motion tolerances specified by the ISO 18192-1 standard and fall within limits, that is, ±0.5° at the maxima and minima of the motion and ±2% of the complete cycle for phasing. The simulation results prove the efficacy of the test setup using HIL simulation to verify and validate the accuracy and robustness of the prospective controller before its deployment into the spinal wear simulator. This method of testing controllers enables a wide range of possibilities to test advanced control algorithms that could potentially even reproduce profiles of patients performing various daily living activities.
Keywords: fuzzy-PID controller, hardware-in-the-loop (HIL), real-time simulation, spinal wear simulator
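As an illustration of the control structure described (a velocity-command PID position loop whose gains are retuned by a fuzzy supervisor), here is a minimal sketch. The plant model, membership functions, gain values, and the 1 Hz sinusoidal profile are illustrative assumptions, not the paper's implementation:

```python
import math

class PID:
    """Velocity-command PID acting on position error."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

def fuzzy_gain_scale(error, max_error):
    """Triangular memberships on |error| (small/medium/large),
    defuzzified into a single multiplicative gain correction."""
    x = min(abs(error) / max_error, 1.0)
    small = max(0.0, 1.0 - 2.0 * x)
    large = max(0.0, 2.0 * x - 1.0)
    medium = 1.0 - small - large
    # Rule base: the larger the error, the more aggressive the P action.
    return small * 0.8 + medium * 1.0 + large * 1.5

dt, pos = 0.001, 0.0
pid = PID(kp=40.0, ki=5.0, kd=0.2, dt=dt)
for k in range(5000):                              # 5 s of simulated time
    t = k * dt
    target = 4.0 * math.sin(2.0 * math.pi * t)     # +/-4 deg, 1 Hz profile
    pid.kp = 40.0 * fuzzy_gain_scale(target - pos, max_error=4.0)
    vel_cmd = pid.step(target, pos)                # deg/s command
    pos += vel_cmd * dt                            # idealised velocity actuator
print(f"final tracking error: {target - pos:+.4f} deg")
```

The supervisory layer here only rescales the proportional gain; a full FLC of the kind the abstract describes would also adapt the integral and derivative terms from the profile's amplitude, shape, and frequency.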
Procedia PDF Downloads 172
162 Smart and Active Package Integrating Printed Electronics
Authors: Joana Pimenta, Lorena Coelho, José Silva, Vanessa Miranda, Jorge Laranjeira, Rui Soares
Abstract:
In this paper, the results of R&D on an innovative food package for increased shelf-life are presented. SAP4MA aims at the development of a printed active device that enables smart packaging solutions for food preservation, targeting the extension of the shelf-life of the packed food through the controlled release of active natural antioxidant agents at the onset of the food degradation process. To do so, SAP4MA focuses on the development of active devices such as printed heaters and batteries/supercapacitors in a label format, integrated into packaging lids during the injection molding process, promoting the release of natural antioxidants passively after the product is packed, during transportation and on the shelves, and actively when the end-user activates the package just prior to consuming the product at home. When the active device on the lid is activated, the release starts of the natural antioxidants embedded in the inner layer of the packaging lid, in direct contact with the headspace atmosphere of the food package. This approach is based on active functional coatings composed of nanoencapsulated active agents (natural antioxidant species) that prevent the oxidation of lipid compounds in food by agents such as oxygen, thus maintaining product quality throughout the shelf-life, not only once the user opens the packaging but also during the period from food packaging up until purchase by the consumer. The active systems that make up the printed smart label (the heating circuit and the battery) were developed using screen-printing technology. These systems must operate under the working conditions associated with this application. The printed heating circuit was studied using three different substrates and two different conductive inks. Inks were selected taking into consideration that the printed circuits will be subjected to high pressures and temperatures during the injection molding process. The circuit must reach a homogeneous temperature of 40 °C over the entire area of the lid of the food tub, promoting a gradual and controlled release of the antioxidant agents. In addition, the circuit design required careful study in order to guarantee maximum performance after the injection process and meet the specifications required by the control electronics. Furthermore, to characterize the different heating circuits, the electrical resistance resulting from the conductive ink and the circuit design, as well as the thermal behavior of printed circuits on different substrates, were evaluated. In the injection molding process, the serpentine-shaped design developed for the heating circuit resolved the issues connected to the injection point; in addition, the materials used in the support and printing had high mechanical resistance against the pressure and temperature inherent to the injection process. Acknowledgment: This research has been carried out within the Project “Smart and Active Packing for Margarine Product” (SAP4MA) running under the EURIPIDES Program, co-financed by COMPETE 2020 (the Operational Programme for Competitiveness and Internationalization) and under Portugal 2020 through the European Regional Development Fund (ERDF).
Keywords: smart package, printed heat circuits, printed batteries, flexible and printed electronics
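As a rough guide to how such a printed heater is sized (not taken from the paper; the symbols below are illustrative), a serpentine trace's resistance follows from the ink's sheet resistance and the trace geometry, and the Joule power it dissipates follows from the drive voltage:

```latex
R = R_s \,\frac{L}{w}
\qquad\Rightarrow\qquad
P = \frac{V^2}{R} = \frac{V^2\,w}{R_s\,L}
```

Here $R_s$ is the ink's sheet resistance, $L$ and $w$ the trace length and width, and $V$ the supply voltage; lengthening the serpentine raises $R$ and lowers the dissipated power at a fixed voltage, which is one reason the circuit design and ink choice had to be studied together.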
Procedia PDF Downloads 110
161 Dynamic Simulation of IC Engine Bearings for Fault Detection and Wear Prediction
Authors: M. D. Haneef, R. B. Randall, Z. Peng
Abstract:
Journal bearings used in IC engines are prone to premature failures and are likely to fail earlier than the rated life due to highly impulsive and unstable operating conditions and frequent starts/stops. Vibration signature extraction and wear debris analysis techniques are prevalent in industry for condition monitoring of rotary machinery. However, both techniques involve a great deal of technical expertise, time, and cost. Limited literature is available on the application of these techniques for fault detection in reciprocating machinery, due to the complex nature of the impact forces that confounds the extraction of fault signals for vibration-based analysis and wear prediction. This work is an extension of a previous study, in which an engine simulation model was developed using a MATLAB/SIMULINK program, whereby the engine parameters used in the simulation were obtained experimentally from a Toyota 3SFE 2.0 litre petrol engine. Simulated hydrodynamic bearing forces were used to estimate vibration signals, and envelope analysis was carried out to analyze the effect of speed, load, and clearance on the vibration response. Three different loads (50/80/110 N·m), three different speeds (1500/2000/3000 rpm), and three different clearances (normal, 2 times, and 4 times the normal clearance) were simulated to examine the effect of wear on bearing forces. The magnitude of the squared envelope of the generated vibration signals was not affected by load but was observed to rise significantly with increasing speed and clearance, indicating the likelihood of augmented wear. In the present study, the simulation model was extended further to investigate the bearing wear behavior arising from different operating conditions, to complement the vibration analysis. In the current simulation, the dynamics of the engine was established first, based on which the hydrodynamic journal bearing forces were evaluated by numerical solution of the Reynolds equation. The essential outputs of interest in this study, critical to determining wear rates, are the tangential velocity and the oil film thickness between the journal and bearing sleeve, which, if not maintained appropriately, have a detrimental effect on bearing performance. Archard's wear prediction model was used in the simulation to calculate the wear rate of the bearings with specific location information, as all determinative parameters were obtained with reference to crank rotation. The oil film thickness obtained from the model was used as a criterion to determine whether the lubrication is sufficient to prevent contact between the journal and bearing, which would cause accelerated wear; a limiting value of 1 µm was used as the minimum oil film thickness needed to prevent contact. The increased wear rate with growing severity of operating conditions is comparable to the rise in amplitude of the squared envelope of the referenced vibration signals. Thus, the developed model both demonstrates its capability to explain wear behavior and helps to establish a correlation between wear-based and vibration-based analysis. The model therefore provides a cost-effective and quick approach to predicting impending wear in IC engine bearings under various operating conditions.
Keywords: condition monitoring, IC engine, journal bearings, vibration analysis, wear prediction
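For reference, Archard's model as it is commonly stated (conventional symbols; the paper's exact formulation and coefficients are not given in the abstract) relates wear volume to load, sliding distance, and hardness, with wear accumulated only while the film thins below the contact threshold:

```latex
V = K\,\frac{W\,s}{H},
\qquad
\frac{dV}{dt} = K\,\frac{W(t)\,v(t)}{H}
\quad \text{accumulated only while } h_{\min}(t) < 1\ \mu\text{m},
```

where $V$ is the wear volume, $W$ the normal load, $s$ the sliding distance, $v$ the tangential velocity, $H$ the hardness of the softer surface, and $K$ a dimensionless wear coefficient. This is why the tangential velocity and minimum film thickness are the critical model outputs named above.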
Procedia PDF Downloads 310
160 Concept Mapping to Reach Consensus on an Antibiotic Smart Use Strategy Model to Promote and Support Appropriate Antibiotic Prescribing in a Hospital, Thailand
Authors: Phenphak Horadee, Rodchares Hanrinth, Saithip Suttiruksa
Abstract:
Inappropriate use of antibiotics occurs in several hospitals in Thailand. Drug use evaluation (DUE) is one strategy to overcome this difficulty. However, most community hospitals still encounter incomplete evaluation, resulting in overuse of antibiotics at high cost. Consequently, drug-resistant bacteria have been on the rise due to inappropriate antibiotic use. The aim of this study was to involve stakeholders in conceptualizing, developing, and prioritizing a feasible intervention strategy to promote and support appropriate antibiotic prescribing in a community hospital, Thailand. Four antibiotics were studied: meropenem, piperacillin/tazobactam, amoxicillin/clavulanic acid, and vancomycin. The study was conducted over the 1-year period between March 1, 2018, and March 31, 2019, in a community hospital in the northeastern part of Thailand. Concept mapping was used with a purposive sample, including doctors (one of whom was an administrator), pharmacists, and nurses involved in the drug use evaluation of antibiotics. In-depth interviews with each participant and survey research were conducted to identify the problems underlying inappropriate use of antibiotics in the drug use evaluation system. Seventy-seven percent of DUE reports showed appropriate antibiotic prescribing, which still did not reach the goal of 80 percent appropriateness; meropenem led the antibiotics in inappropriate prescribing. The causes of the unsuccessful DUE program were classified into three themes: personnel, lack of public relations and communication, and unsupportive policy and impractical regulations. During the first meeting, stakeholders (n = 21) generated intervention ideas. During the second meeting, participants, almost the same group of people as in the first meeting (n = 21), were asked to independently rate the feasibility and importance of each idea and to categorize the ideas into relevant clusters to facilitate multidimensional scaling and hierarchical cluster analysis. The outputs of the analysis included the idea list, cluster list, point map, point rating map, cluster map, and cluster rating map. All of these were distributed to participants (n = 21) during the third meeting to reach consensus on an intervention model. The final proposed intervention strategy included 29 feasible and crucial interventions in seven clusters: development of an information technology system; establishing policy and translating it into an action plan; proactive public relations for the policy, action plan, and workflow; cooperation of multidisciplinary teams in drug use evaluation; work review and evaluation with performance reporting; promoting and developing professional and clinical skills for staff through training programs; and developing a practical drug use evaluation guideline for antibiotics. These interventions are relevant to and fit several intervention strategies for antibiotic stewardship programs in many international organizations, such as participation of the multidisciplinary team, developing information technology to support antibiotic smart use, and communication. These interventions were prioritized for implementation over a 1-year period. Once the feasibility of each activity or plan is established, the proposed program could be applied and integrated into hospital policy after evaluation. The effectiveness of each intervention could then be promoted to other community hospitals to promote and support antibiotic smart use.
Keywords: antibiotic, concept mapping, drug use evaluation, multidisciplinary teams
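The analysis step described above (sortings turned into a point map by multidimensional scaling, then a cluster map by hierarchical clustering) can be sketched as follows. The sorting data here are random stand-ins, not the study's data, and the variable names are hypothetical:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
n_ideas, n_participants = 29, 21

# Each participant sorts the ideas into (here, up to 7) piles; ideas placed
# in the same pile count as similar, giving an idea-by-idea similarity matrix.
sorts = rng.integers(0, 7, size=(n_participants, n_ideas))
similarity = (sorts[:, :, None] == sorts[:, None, :]).sum(axis=0)
distance = (n_participants - similarity).astype(float)

# Point map: project the dissimilarities into 2-D with multidimensional scaling.
xy = MDS(n_components=2, dissimilarity="precomputed",
         random_state=0).fit_transform(distance)

# Cluster map: Ward hierarchical clustering on the point map, cut into seven
# clusters to mirror the seven clusters reported in the study.
cluster_of_idea = fcluster(linkage(xy, method="ward"), t=7, criterion="maxclust")
print(cluster_of_idea)
```

Rating each idea's feasibility and importance then produces the point rating and cluster rating maps by averaging the ratings per point and per cluster.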
Procedia PDF Downloads 121
159 New Territories: Materiality and Craft from Natural Systems to Digital Experiments
Authors: Carla Aramouny
Abstract:
Digital fabrication, between advancements in software and machinery, is pushing practice today towards more complexity in design, allowing for unparalleled explorations. It gives designers the immediate capacity to turn their imagined objects into physical results. Yet at no time have questions of material knowledge been more relevant and crucial, as technological advancements approach a radical re-invention of the design process. As more and more designers look towards tactile crafts for material know-how, an interest in natural behaviors has also emerged, trying to embed intelligence from nature into the designed objects. Concerned with enhancing their immediate environment, designers today are pushing the boundaries of design by bringing in natural systems, materiality, and advanced fabrication as essential processes to produce active designs. New Territories, a yearly architecture and design course on digital design and materiality, allows students to explore processes of digital fabrication in intersection with natural systems and hands-on experiments. This paper will highlight the importance of learning from nature and from physical materiality in a digital design process, and how the simultaneous move between the digital and physical realms has become an essential design method. It will detail the work done over the course of three years, on themes of natural systems, crafts, concrete plasticity, and active composite materials. The aim throughout the course is to explore the design of products and active systems, be it modular facades, intelligent cladding, or adaptable seating, by embedding current digital technologies with an understanding of natural systems and a physical know-how of material behavior. From this aim, three main themes of inquiry have emerged through the varied explorations across the three years, each one approaching materiality and digital technologies through a different lens. The first theme involves crossing the study of natural systems, as precedents for intelligent formal assemblies, with traditional craft methods. The students worked on designing performative facade systems, starting from the study of relevant natural systems and a specific craft, and then using parametric modeling to develop their modular facades. The second theme looks at the crossing of craft and digital technologies through form-finding techniques and elastic material properties, bringing flexible formwork into the digital fabrication process. Students explored concrete plasticity and behaviors with natural references as they worked on the design of an exterior seating installation using lightweight concrete composites and complex casting methods. The third theme brings bio-composite material properties together with additive fabrication and environmental concerns to create performative cladding systems. Students experimented with concrete composite materials, biomaterials, and clay 3D printing to produce different cladding and tiling prototypes that actively enhance their immediate environment. This paper thus details the work process followed by the students under these three themes of inquiry, describing their material experimentation, their digital and analog design methodologies, and their final results. It aims to shed light on the persisting importance of material knowledge as it intersects with advanced digital fabrication, and on the significance of learning from natural systems and biological properties to embed an active performance in today's design process.
Keywords: digital fabrication, design and craft, materiality, natural systems
Procedia PDF Downloads 128
158 Propagation of Ultra-High Energy Cosmic Rays through Extragalactic Magnetic Fields: An Exploratory Study of the Distance Amplification from Rectilinear Propagation
Authors: Rubens P. Costa, Marcelo A. Leigui de Oliveira
Abstract:
The comprehension of features of the energy spectra, the chemical compositions, and the origins of Ultra-High Energy Cosmic Rays (UHECRs), mainly atomic nuclei with energies above ~1.0 EeV (exa-electron volts), is intrinsically linked to the problem of determining the magnitude of their deflections in cosmic magnetic fields on cosmological scales. In addition, as they propagate from the source to the observer, modifications are expected in their original energy spectra, anisotropy, and chemical compositions due to interactions with low-energy photons and matter. This means that any consistent interpretation of the nature and origin of UHECRs has to include detailed knowledge of their propagation in a three-dimensional environment, taking into account the magnetic deflections and energy losses. The parameter space for the magnetic fields in the universe is very large, because the field strength and especially the orientation carry big uncertainties. In particular, the strength and morphology of the Extragalactic Magnetic Fields (EGMFs) remain largely unknown, because of the intrinsic difficulty of observing them. Monte Carlo simulations of charged particles traveling through a simulated magnetized universe are the most straightforward way to study the influence of extragalactic magnetic fields on UHECR propagation. However, this brings two major difficulties: accurate numerical modeling of charged-particle diffusion in magnetic fields, and accurate numerical modeling of the magnetized universe. Since magnetic fields do not cause energy losses, it is important to impose that the particle tracking method conserves the particle's total energy and that energy changes result only from interactions with background photons. Hence, special attention should be paid to computational effects. Additionally, because of the number of particles necessary to obtain a relevant statistical sample, the particle tracking method must be computationally efficient. In this work, we present an analysis of the propagation of ultra-high energy charged particles in the intergalactic medium. The EGMFs are considered to be coherent within cells of 1 Mpc (megaparsec) diameter, wherein they have uniform intensities of 1 nG (nanogauss). Moreover, each cell has its field orientation chosen randomly, and a border region is defined such that, at distances beyond 95% of the cell radius from the cell center, smooth transitions are applied in order to avoid discontinuities. The smooth transitions are simulated by weighting the magnetic field orientation by the particle's distance to the two nearby cells. The energy losses have been treated in the continuous approximation, parameterizing the mean energy loss per unit path length by the energy loss length. We show, for a particle with the typical energy of interest, the performance of the integration method in terms of the relative error of the Larmor radius (without energy losses) and the relative error of the energy. Additionally, we plot the distance amplification from rectilinear propagation as a function of the traveled distance and the particle's magnetic rigidity (without energy losses) and the particle's energy (with energy losses), to study the influence of the particle species on these calculations. The results clearly show when it is necessary to use a full three-dimensional simulation.
Keywords: cosmic rays propagation, extragalactic magnetic fields, magnetic deflections, ultra-high energy
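To put the 1 Mpc / 1 nG cell parameters in scale, a back-of-the-envelope check of the Larmor radius and the small-angle random-walk deflection accumulated over a path D can be written as below. These are standard order-of-magnitude estimates, not the paper's numbers:

```python
import math

E_EV  = 1.0e20         # particle energy in eV (100 EeV)
Z     = 1              # charge number (proton)
B_T   = 1.0e-9 * 1e-4  # 1 nG expressed in tesla (1 G = 1e-4 T)
MPC_M = 3.086e22       # metres per megaparsec

# Relativistic Larmor radius r_L = E / (Z e c B).
E_J = E_EV * 1.602e-19
r_larmor = E_J / (Z * 1.602e-19 * 2.998e8 * B_T)
print(f"Larmor radius ~ {r_larmor / MPC_M:.0f} Mpc")  # ~108 Mpc

# Small-angle random walk through D/l_c cells of coherence length l_c:
# each cell deflects by ~l_c/r_L and the deflections add in quadrature.
l_c, D = 1.0, 100.0                                   # Mpc
theta = (l_c * MPC_M / r_larmor) * math.sqrt(D / l_c)
print(f"rms deflection over {D:.0f} Mpc ~ {math.degrees(theta):.1f} deg")
```

At the lower end of the energy range (~1 EeV), the Larmor radius shrinks to roughly the cell size itself, the small-angle approximation breaks down, and propagation becomes diffusive, which is precisely the regime where the full three-dimensional simulation and the distance amplification measure become necessary.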
Procedia PDF Downloads 127
157 Solid Polymer Electrolyte Membranes Based on Siloxane Matrix
Authors: Natia Jalagonia, Tinatin Kuchukhidze
Abstract:
Polymer electrolytes (PE) play an important part in electrochemical devices such as batteries and fuel cells. To achieve optimal performance, the PE must maintain high ionic conductivity and mechanical stability at both high and low relative humidity. The polymer electrolyte also needs excellent long-term chemical stability and robustness. According to the prevailing theory, ionic conduction in polymer electrolytes is facilitated by the large-scale segmental motion of the polymer backbone and primarily occurs in the amorphous regions of the polymer electrolyte. Crystallinity restricts polymer backbone segmental motion and significantly reduces conductivity. Consequently, polymer electrolytes with high conductivity at room temperature have been sought among polymers that have highly flexible backbones and largely amorphous morphology. Interest in polymer electrolytes has also been increased by potential applications of solid polymer electrolytes in high energy density solid-state batteries, gas sensors, and electrochromic windows. A conductivity of 10⁻³ S/cm is commonly regarded as the necessary minimum value for practical applications in batteries. At present, polyethylene oxide (PEO)-based systems are the most thoroughly investigated, reaching room-temperature conductivities of 10⁻⁷ S/cm in some cross-linked salt-in-polymer systems based on amorphous PEO-polypropylene oxide copolymers. It is widely accepted that amorphous polymers with low glass transition temperatures Tg and high segmental mobility are important prerequisites for high ionic conductivities. Another necessary condition for high ionic conductivity is high salt solubility in the polymer, most often achieved by donors such as ether oxygen or imide groups on the main chain or on the side groups of the PE. It is also well established that lithium ion coordination takes place predominantly in the amorphous domain, and that the segmental mobility of the polymer is an important factor in determining the ionic mobility. Much attention has been given to PEO-based amorphous electrolytes obtained by the synthesis of comb-like polymers, attaching short ethylene oxide unit sequences to an existing amorphous polymer backbone. The aim of the presented work is to obtain solid polymer electrolyte membranes using PMHS as a matrix. For this purpose, the hydrosilylation reactions of α,ω-bis(trimethylsiloxy)methylhydrosiloxane with allyl triethylene-glycol monomethyl ether and vinyltriethoxysilane, at a 1:28:7 ratio of the initial compounds, in the presence of Karstedt's catalyst, platinum hydrochloric acid (0.1 M solution in THF), and platinum-on-carbon catalyst, in 50% solution of anhydrous toluene, have been studied. The synthesized oligomers are vitreous liquid products, which are well soluble in organic solvents, with specific viscosity ηsp ≈ 0.05 - 0.06. The synthesized oligomers were analysed with FTIR and ¹H, ¹³C, ²⁹Si NMR spectroscopy. The synthesized polysiloxanes were investigated with wide-angle X-ray, gel-permeation chromatography, and DSC analyses. Solid polymer electrolyte membranes have been obtained via sol-gel processing of the polymer systems doped with lithium trifluoromethylsulfonate (triflate) or lithium bis(trifluoromethylsulfonyl)imide. The dependence of ionic conductivity on temperature and salt concentration was investigated, and the activation energies of conductivity for all obtained compounds were calculated.
Keywords: synthesis, PMHS, membrane, electrolyte
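The abstract does not state which functional form was fitted to extract the activation energies; the two forms commonly used for such membranes are the Arrhenius law and, for conduction coupled to segmental motion above Tg, the Vogel-Tammann-Fulcher (VTF) equation:

```latex
\sigma(T) = \sigma_0 \exp\!\left(-\frac{E_a}{k_B T}\right)
\quad\text{(Arrhenius)},
\qquad
\sigma(T) = \frac{\sigma_0}{\sqrt{T}}\,
\exp\!\left(-\frac{B}{k_B\,(T - T_0)}\right)
\quad\text{(VTF)},
```

where $E_a$ is the activation energy, $T_0$ a reference temperature typically a few tens of kelvin below Tg, and $B$ a pseudo-activation energy; an Arrhenius plot of $\ln\sigma$ against $1/T$ yields $E_a$ from the slope.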
Procedia PDF Downloads 258
156 Real and Symbolic in Poetics of Multiplied Screens and Images
Authors: Kristina Horvat Blazinovic
Abstract:
In the context of a work of art, one can talk about the idea-concept-term-intention expressed by the artist through various forms of repetition (external, material, visible repetition). Such repetitions of elements (images in space, or moving visual and sound images in time) suggest a "covert", "latent" ("dressed") repetition, i.e., a "hidden", "latent" term-intention-idea. Repeating in this way reveals a "deeper truth" that the viewer needs to decode and which is hidden "under" the technical manifestation of the multiplied images. It is not only images, sounds, and screens that are repeated; something else is repeated through them as well, even if, in some cases, it is the very idea of repetition that is repeated. This paper examines serial images and single-channel or multi-channel artworks in the field of video/film art and video installations, which in some way imply the concept of repetition and multiplication. Moving or static images and screens (as multi-screens) are repeated in time and space. The categories of the real and the symbolic refer in part to the Lacanian registers, i.e., the Imaginary-Symbolic-Real trinity that represents the orders within which human subjectivity is established. Authors such as Bruce Nauman, VALIE EXPORT, Ragnar Kjartansson, Wolf Vostell, Shirin Neshat, Paul Sharits, Harun Farocki, Dalibor Martinis, Andy Warhol, Douglas Gordon, Bill Viola, Frank Gillette, Ira Schneider, and Marina Abramovic problematize, in different ways, the concept and procedures of multiplication and repetition, not in the sense of "copying" or "repeating" reality or the original, but of repeated repetitions of the simulacrum. The referenced works of art are often connected by the theme of the traumatic. Repetitions of images and situations are a response to the traumatic (experience); repetition itself is a symptom of trauma. On the other hand, repeating and multiplying traumatic images either produces a new traumatic effect or cancels it. Reflections on repetition as a temporal and spatial phenomenon are pursued in the chapters that link philosophical considerations of space and time and the experience of temporality with their manifestation in works of art. The observations about time and the relation of perception and memory follow Henri Bergson and his conception of duration (durée) as "quality of quantity." Video works intended to be displayed as a video loop express the idea of infinite duration ("pure time," according to Bergson). The loop wants to be always present, to fixate in time. Wholeness is unrecognizable because the intention is to make the effect infinitely cyclic. Reflections on time and space end with considerations of the occurrence and effects of time and space intervals as places and moments "between", the points of connection and separation, of continuity and stopping, with reference to the "interval theory" of the Soviet filmmaker Dziga Vertov. The range of opportunities that can be explored in the interval mode is wide. Intervals represent the perception of time and space in the form of pauses, interruptions, breaks (e.g., emotional, dramatic, or rhythmic); they denote emptiness or silence, distance, proximity, interstitial space, or a gap between various states.
Keywords: video installation, performance, repetition, multi-screen, real and symbolic, loop, video art, interval, video time
Procedia PDF Downloads 174
155 Active Filtration of Phosphorus in Ca-Rich Hydrated Oil Shale Ash Filters: The Effect of Organic Loading and Form of Precipitated Phosphatic Material
Authors: Päärn Paiste, Margit Kõiv, Riho Mõtlep, Kalle Kirsimäe
Abstract:
For small-scale wastewater management, treatment wetlands (TWs) can be used as a low-cost alternative to conventional treatment facilities. However, the P removal capacity of TW systems is usually problematic. P removal in TWs depends mainly on the physico-chemical and hydrological properties of the filter material. The highest P removal efficiency has been shown through Ca-phosphate precipitation (i.e., active filtration) in Ca-rich alkaline filter materials, e.g., industrial by-products like hydrated oil shale ash (HOSA) and metallurgical slags. In this contribution, we report preliminary results of a full-scale TW system using HOSA material for P removal from municipal wastewater at the Nõo site, Estonia. The main goals of this ongoing project are to evaluate: a) the long-term P removal efficiency of HOSA using real wastewater; b) the effect of a high organic loading rate; c) the effects of variable P-loading on the P removal mechanism (adsorption/direct precipitation); and d) the form and composition of the phosphate precipitates. An onsite full-scale experiment with two concurrent filter systems for the treatment of municipal wastewater was established in September 2013. The system's pretreatment steps include a septic tank (2 m²) and vertical down-flow LECA filters (3 m² each), followed by horizontal subsurface HOSA filters (effective volume 8 m³ each). The overall organic and hydraulic loading rates of both systems are the same. However, the first system is operated in a stable hydraulic loading regime and the second in a variable loading regime that imitates the wastewater production of an average household. Piezometers for water and perforated sample containers for filter material sampling were incorporated inside the filter beds to allow continuous in-situ monitoring. During the 18 months of operation, the median removal efficiency (inflow to outflow) of both systems was over 99% for TP, 93% for COD, and 57% for TN. However, we observed significant differences between samples collected at different points inside the filter systems. In both systems, we observed the development of preferred flow paths and zones with high and low loadings. The filters show the formation and gradual advance of a "dead" zone along the flow path (a zone of saturated filter material characterized by ineffective removal rates), which develops more rapidly in the system working under the variable loading regime. The formation of the "dead" zone is accompanied by the growth of organic substances on the filter material particles that evidently inhibit P removal. Phase analysis of the used filter materials by the X-ray diffraction method reveals the formation of minor amounts of amorphous Ca-phosphate precipitates. This finding is supported by ATR-FTIR and SEM-EDS measurements, which also reveal Ca-phosphate and authigenic carbonate precipitation. Our first experimental results demonstrate that organic pollution and the loading regime significantly affect the performance of hydrated ash filters. The material analyses also show that P is incorporated into a carbonate-substituted hydroxyapatite phase.
Keywords: active filtration, apatite, hydrated oil shale ash, organic pollution, phosphorus
Procedia PDF Downloads 275
154 Investigation of Pu-238 Heat Source Modifications to Increase Power Output through (α,N) Reaction-Induced Fission
Authors: Alex B. Cusick
Abstract:
The objective of this study is to improve upon current ²³⁸PuO₂ fuel technology for space and defense applications. Modern RTGs (radioisotope thermoelectric generators) utilize the heat generated from the radioactive decay of ²³⁸Pu to create heat and electricity for long-term and remote missions. Application of RTG technology is limited by the scarcity and expense of producing the isotope, as well as by the power output, which is limited to only a few hundred watts. The scarcity and expense make the efficient use of ²³⁸Pu absolutely necessary. By utilizing the decay of ²³⁸Pu not only to produce heat directly but also to indirectly induce fission in ²³⁹Pu (which is already present within currently used fuel), it is possible to see large increases in temperature, which allows for a more efficient conversion to electricity and a higher power-to-weight ratio. This concept can reduce the quantity of ²³⁸Pu necessary for these missions, potentially saving millions in investment, while yielding higher power output. Current work investigating radioisotope power systems has focused on improving the efficiency of the thermoelectric components and on replacing systems that produce heat by virtue of natural decay with fission reactors. The technical feasibility of utilizing (α,n) reactions to induce fission within current radioisotopic fuels has not been investigated in any appreciable detail, and our study aims to thoroughly investigate the performance of many such designs, develop those with the highest capabilities, and facilitate experimental testing of these designs. In order to determine the specific design parameters that maximize power output and the efficient use of ²³⁸Pu for future RTG units, MCNP6 simulations have been used to characterize the effects of modifying fuel composition, geometry, and porosity, as well as of introducing neutron moderating, reflecting, and shielding materials to the system. Although this project is currently in its preliminary stages, the final deliverables will include sophisticated designs and simulation models that define all characteristics of multiple novel RTG fuels, detailed enough to allow immediate fabrication and testing. Preliminary work has consisted of developing a benchmark model to accurately represent the ²³⁸PuO₂ pellets currently in use by NASA; this model utilizes the alpha transport capabilities of MCNP6 and agrees well with experimental data. In addition, several models have been developed by varying specific parameters to investigate their effect on (α,n) and (n,fission) reaction rates. Current practice in fuel processing is to exchange out the small portion of naturally occurring ¹⁸O and ¹⁷O to limit (α,n) reactions and avoid unnecessary neutron production. However, we have shown that enriching the oxide in ¹⁸O introduces a sufficient (α,n) reaction rate to support significant fission rates. For example, subcritical fission rates above 10⁸ f/cm³-s are easily achievable in cylindrical ²³⁸PuO₂ fuel pellets with an ¹⁸O enrichment of 100%, given an increase in size and a ⁹Be clad. Many viable designs exist, and our intent is to discuss current results and future endeavors on this project.
Keywords: radioisotope thermoelectric generators (RTG), Pu-238, subcritical reactors, (alpha, n) reactions
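The multiplication chain the design exploits can be summarized as follows (the reactions themselves are standard nuclear data; the achievable rates and yields are what the MCNP6 study quantifies):

```latex
^{238}\mathrm{Pu} \;\longrightarrow\; ^{234}\mathrm{U} + \alpha,
\qquad
\alpha + {}^{18}\mathrm{O} \;\longrightarrow\; ^{21}\mathrm{Ne} + n,
\qquad
n + {}^{239}\mathrm{Pu} \;\longrightarrow\; \text{fission products} + \text{heat} + \nu\,n.
```

Enriching the oxide in ¹⁸O boosts the second step, while moderating and reflecting materials (and the ⁹Be clad) raise the probability that the resulting neutrons reach the third, fission, step rather than escaping the pellet.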
Procedia PDF Downloads 173
153 Protected Cultivation of Horticultural Crops: Increases Productivity per Unit of Area and Time
Authors: Deepak Loura
Abstract:
The most contemporary method of producing horticultural crops both qualitatively and quantitatively is protected cultivation, or greenhouse cultivation, which has gained widespread acceptance in recent decades. Protected farming, commonly referred to as controlled environment agriculture (CEA), is extremely productive, land- and water-efficient, as well as environmentally friendly. The technology entails growing horticultural crops in a controlled environment where variables such as temperature, humidity, light, soil, water, and fertilizer are adjusted to achieve optimal output and enable a consistent supply even during the off-season. Over the past ten years, protected cultivation of high-value crops and cut flowers has demonstrated remarkable potential. More and more agricultural and horticultural crop production systems are moving to protected environments as a result of the growing demand for high-quality products in global markets. By covering the crop, it is possible to control the macro- and microenvironments, enhancing plant performance and allowing for longer production times, earlier harvests, and higher yields of higher quality. These sheltering structures alter the environment of the plant while also offering protection from wind, rain, and insects. Protected farming opens up hitherto unexplored opportunities in agriculture as the liberalised economy and improved agricultural technologies advance. Typically, the revenues from fruit, vegetable, and flower crops are 4 to 8 times higher than those from other crops; if any of these high-value crops are cultivated in protected environments like greenhouses, net houses, and tunnels, this profit can be multiplied. Post-harvest losses of vegetables and cut flowers are extremely high (20–0%), but sheltered growing techniques and year-round cropping can greatly reduce post-harvest losses and enhance yield by 5–10 times. Seasonality and weather have a big impact on the production of vegetables and flowers, and the resulting variability leads to significant price and quality fluctuations. For the application of current technology in crop production, achieving a balance between year-round availability of vegetables and flowers, minimal environmental impact, and remaining competitive is a significant challenge. The future of agriculture lies in protected cultivation, since population growth is reducing the amount of land that can be held. Protected agriculture is a particularly profitable endeavor for small landholdings: farmers can build small greenhouses, net houses, nurseries, and low-tunnel greenhouses to increase their income. The rise in biotic and abiotic stress factors also favors protected agriculture. As a result of the greater productivity levels, these technologies are opening up opportunities not only for producers with larger landholdings but also for those with smaller holdings. Protected cultivation can be thought of as a kind of precise, forward-thinking, parallel agriculture that covers almost all aspects of farming, though it remains subject to further examination for technical applicability to local circumstances, farmer economics, and market economics.
Keywords: protected cultivation, horticulture, greenhouse, vegetable, controlled environment agriculture
Procedia PDF Downloads 76
152 Early Impact Prediction and Key Factors Study of Artificial Intelligence Patents: A Method Based on LightGBM and Interpretable Machine Learning
Authors: Xingyu Gao, Qiang Wu
Abstract:
Patents play a crucial role in protecting innovation and intellectual property. Early prediction of the impact of artificial intelligence (AI) patents helps researchers and companies allocate resources and make better decisions. Understanding the key factors that influence patent impact can assist researchers in gaining a better understanding of the evolution of AI technology and innovation trends. Therefore, identifying highly impactful patents early and providing support for them holds immeasurable value in accelerating technological progress, reducing research and development costs, and mitigating market positioning risks. Despite extensive research on AI patents, accurately predicting their early impact remains a challenge. Traditional methods often consider only single factors or simple combinations, failing to comprehensively and accurately reflect the actual impact of patents. This paper utilized the artificial intelligence patent database from the United States Patent and Trademark Office and the Lens.org patent retrieval platform to obtain specific information on 35,708 AI patents. Using six machine learning models, namely Multiple Linear Regression, Random Forest Regression, XGBoost Regression, LightGBM Regression, Support Vector Machine Regression, and K-Nearest Neighbors Regression, and using early indicators of the patents as features, the paper comprehensively predicted patent impact in three aspects: technical, social, and economic. These aspects cover the technical leadership of patents, the number of citations they receive, and their shared value. The SHAP (Shapley Additive exPlanations) metric was used to explain the predictions of the best model, quantifying the contribution of each feature to the model's predictions. The experimental results on the AI patent dataset indicate that, for all three target variables, LightGBM regression shows the best predictive performance. Specifically, patent novelty has the greatest impact on predicting the technical impact of patents, and its effect is positive. Additionally, the number of owners, the number of backward citations, and the number of independent claims are all crucial and have a positive influence on predicting technical impact. In predicting the social impact of patents, the number of applicants is the most critical input variable, but it has a negative effect on social impact. At the same time, the number of independent claims, the number of owners, and the number of backward citations are also important predictive factors, and they have a positive effect on social impact. For predicting the economic impact of patents, the number of independent claims is the most important factor and has a positive effect on economic impact. The number of owners, the number of sibling countries or regions, and the size of the extended patent family also have a positive influence on economic impact. The study relies primarily on United States Patent and Trademark Office data for artificial intelligence patents; future research could consider more comprehensive data sources, including artificial intelligence patent data from a global perspective. While the study takes various factors into account, there may still be other important features not considered. In the future, factors such as patent implementation and market applications could be considered, as they may affect the influence of patents.
Keywords: patent influence, interpretable machine learning, predictive models, SHAP
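A minimal sketch of the modelling-and-explanation step (LightGBM regression on early indicators, then SHAP attribution) is shown below. The feature names follow the abstract, but the data frame is a random stand-in, not the study's dataset:

```python
import lightgbm as lgb
import numpy as np
import pandas as pd
import shap
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
X = pd.DataFrame({
    "novelty": rng.random(n),
    "n_owners": rng.integers(1, 6, n),
    "n_backward_citations": rng.integers(0, 60, n),
    "n_independent_claims": rng.integers(1, 12, n),
    "n_applicants": rng.integers(1, 8, n),
})
# Synthetic "technical impact" target with a known dependence on the features.
y = 2.0 * X["novelty"] + 0.05 * X["n_backward_citations"] + rng.normal(0, 0.3, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = lgb.LGBMRegressor(n_estimators=300, learning_rate=0.05)
model.fit(X_tr, y_tr)

# SHAP values quantify each feature's contribution to each prediction;
# ranking by mean |SHAP| gives the global importance ordering.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_te)
mean_abs = np.abs(shap_values).mean(axis=0)
print(sorted(zip(X.columns, mean_abs), key=lambda t: -t[1]))
```

The sign of the average SHAP value per feature, rather than just its magnitude, is what supports directional statements such as "the number of applicants has a negative effect on social impact".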
Procedia PDF Downloads 50
151 Sustainable Crop Production: Greenhouse Gas Management in Farm Value Chain
Authors: Aswathaman Vijayan, Manish Jha, Ullas Theertha
Abstract:
Climate change and global warming have become an issue for both developed and developing countries, and perhaps the biggest threat to the environment. We at ITC Limited believe that a company's performance must be measured by its Triple Bottom Line contribution to building economic, social, and environmental capital. This Triple Bottom Line strategy focuses on embedding sustainability in business practices, investing in social development, and adopting a low-carbon growth path with a cleaner-environment approach. The Agri Business Division (ILTD) operates in the tobacco-growing regions of the Andhra Pradesh and Karnataka provinces of India. The Agri value chain of the company comprises two distinct phases: the first phase is the agricultural operations undertaken by ITC-trained farmers, and the second phase is the industrial operations, which include marketing and processing of the agricultural produce. This research work covers the Greenhouse Gas (GHG) management strategy of ITC in the agricultural operations undertaken by the farmers. The agriculture sector adds considerably to global GHG emissions through the use of carbon-based energies, the use of fertilizers, and other farming operations such as ploughing. In order to minimize the impact of farming operations on the environment, ITC has taken a big leap in implementing systems and processes to reduce the GHG impact in the farm value chain by partnering with the farming community. The company has undertaken a unique three-pronged approach to GHG management in the farm value chain: 1) GHG inventory of the farm value chain: different sources of GHG emission in the farm value chain were identified and quantified for the baseline year, as per the IPCC guidelines for greenhouse gas inventories. The major sources of emission identified are: emission due to nitrogenous fertilizer application during seedling production and in the main field; emission due to diesel usage by farm machinery; emission due to fuel consumption; and emission due to the burning of crop residues. 2) Identification and implementation of technologies to reduce GHG emission: various methodologies and technologies were identified for each GHG emission source and implemented at farm level. The identified methodologies are: reducing the consumption of chemical fertilizer at the farm through site-specific nutrient recommendations; use of a sharp shovel for land preparation to reduce diesel consumption; implementation of energy conservation technologies to reduce fuel requirements; and avoiding the burning of crop residue by incorporating it into the main field. These methodologies were implemented at farm level, and the GHG emission was quantified again to understand the reduction achieved. 3) Social and farm forestry for CO₂ sequestration: in addition, the company encouraged social and farm forestry on wastelands to convert them into green cover. The plantations are carried out with fast-growing trees, viz. Eucalyptus, Casuarina, and Subabul, at a rate of 10,000 ha of land per year. The above approach minimized a considerable amount of GHG emission in the farm value chain, benefiting farmers, the community, and the environment as a whole. In addition, the CO₂ stock created by the social and farm forestry program has made the farm value chain environment-friendly.
Keywords: CO₂ sequestration, farm value chain, greenhouse gas, ITC limited
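The inventory step follows the usual IPCC accounting pattern, in which each source's emission is its activity data multiplied by an emission factor and converted to CO₂ equivalents via the gas's global warming potential (a generic Tier-1 style relation, not a formula quoted from the paper):

```latex
E_{\mathrm{CO_2e}} \;=\; \sum_{i} AD_i \times EF_i \times GWP_i,
```

where $AD_i$ is the activity data for source $i$ (e.g., kg of nitrogenous fertilizer applied or litres of diesel burned), $EF_i$ the corresponding emission factor, and $GWP_i$ the global warming potential of the emitted gas. Re-running the same sums after the interventions gives the emission reduction attributed to each methodology.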
Procedia PDF Downloads 297
150 Catchment Nutrient Balancing Approach to Improve River Water Quality: A Case Study at the River Petteril, Cumbria, United Kingdom
Authors: Nalika S. Rajapaksha, James Airton, Amina Aboobakar, Nick Chappell, Andy Dyer
Abstract:
Nutrient pollution and its impact on water quality is a key concern in England. Many water quality issues originate from multiple sources of pollution spread across the catchment. River water quality in England has improved since the 1990s, and wastewater effluent discharges into rivers now contain less phosphorus than in the past. However, excess phosphorus is still recognised as the prevailing issue for rivers failing Water Framework Directive (WFD) good ecological status. To achieve WFD phosphorus objectives, Wastewater Treatment Works (WwTW) permit limits are becoming increasingly stringent. Nevertheless, in some rural catchments, the apportionment of phosphorus pollution can be greater from agricultural runoff and other sources such as septic tanks. Therefore, the challenge of meeting the requirements of watercourses to deliver WFD objectives often goes beyond water company activities, providing significant opportunities to co-deliver activities in the wider catchment to reduce the nutrient load at source. The aim of this study was to apply United Utilities' Catchment Systems Thinking (CaST) strategy and pilot an innovative permitting approach, Catchment Nutrient Balancing (CNB), in a rural catchment in Cumbria (the River Petteril), in collaboration with the regulator and others, to achieve WFD objectives and multiple benefits. The study area is mainly agricultural land, predominantly livestock farms. The local ecology is impacted by significant nutrient inputs, which require intervention to meet WFD obligations. There is a range of phosphorus inputs into the river, including discharges from wastewater assets but also significant agricultural contributions. Solely focusing on the WwTW discharges would not have resolved the problem; hence, in order to address this issue effectively, a CNB trial was initiated at a small WwTW, targeting the removal of a total of 150 kg of phosphorus load, of which 13 kg (roughly 9% of the total) were to be removed through catchment interventions. Various catchment interventions were implemented across selected farms in the upstream part of the catchment, and an innovative Polonite reactive filter medium was implemented at the WwTW as an alternative to traditional phosphorus treatment methods. During the 3 years of this trial, the impact of the interventions in the catchment and at the treatment works was monitored. In 2020 and 2022, the trial achieved 69% and 63% reductions, respectively, in the phosphorus level in the catchment, against the initial reduction target of 9%. Phosphorus treatment at the WwTW had a significant impact on the overall load reduction. The wider catchment impact, however, was seven times greater than the initial target once the wider catchment interventions were also established. While it is unlikely that all of the phosphorus load reduction was delivered exclusively by the interventions implemented through this project, this trial evidenced the enhanced benefits that can be achieved with an integrated approach that engages all sources of pollution within the catchment, rather than focusing on a one-size-fits-all solution. Primarily, the CNB approach and the act of collaboratively engaging others, particularly the agriculture sector, are likely to yield improved farm and land management performance and better compliance, which can lead to improved river quality as well as wider benefits.
Keywords: agriculture, catchment nutrient balancing, phosphorus pollution, water quality, wastewater
Procedia PDF Downloads 67
149 Organisational Mindfulness Case Study: A 6-Week Corporate Mindfulness Programme Significantly Enhances Organisational Well-Being
Authors: Dana Zelicha
Abstract:
A 6-week mindfulness programme was launched to improve the well-being and performance of 20 managers (including the supervisor) of an international corporation in London. A unique assessment methodology was customised to the organisation's needs, measuring four parameters: prioritising skills, listening skills, mindfulness levels, and happiness levels. All parameters showed significant improvements (p < 0.01) post intervention, with a remarkable increase in listening skills and mindfulness levels. Although corporate mindfulness programmes have proven to be effective, the challenge remains the low engagement levels at home and the implementation of these tools beyond the scope of the intervention. This study offered an innovative approach to reinforcing home engagement levels, which yielded promising results. The programme launched with a 2-day introductory intervention, followed by a 6-week training course (1 day a week; 2 hours each). Participants learned the basic principles of mindfulness, such as mindfulness meditations, Mindfulness Based Stress Reduction (MBSR) techniques, and Mindfulness Based Cognitive Therapy (MBCT) practices, to incorporate into their professional and personal lives. The programme contained experiential mindfulness meditations and innovative mindfulness tools (OWBA-MT) created by OWBA - The Well Being Agency. Exercises included Mindful Meetings, Unitasking, and Mindful Feedback. All sessions concluded with guided discussions and group reflections. One fundamental element of this programme was the engagement level outside the workshops. In the office, participants connected with a mindfulness buddy, a team member in the group with whom they could find support throughout the programme. At home, participants completed daily online mindfulness forms that varied according to weekly themes. These customised forms gave participants the opportunity to reflect on whether they made time for daily mindfulness practice, and facilitated a sense of continuity and responsibility. At the end of the programme, the most engaged team member was crowned the 'mindful maven' and received a special gift. The four parameters were measured using online self-reported questionnaires, including the Listening Skills Inventory (LSI), the Mindfulness Attention Awareness Scale (MAAS), the Time Management Behaviour Scale (TMBS), and a modified version of the Oxford Happiness Questionnaire (OHQ). Pre-intervention questionnaires were collected at the start of the programme, and post-intervention data were collected 4 weeks following completion. Quantitative analysis using paired t-tests of means showed significant improvements, with a 23% increase in listening skills, a 22% improvement in mindfulness levels, a 12% increase in prioritising skills, and an 11% improvement in happiness levels. Participant testimonials exhibited high levels of satisfaction, and the overall results indicate that the mindfulness programme substantially impacted the team. These results suggest that 6-week mindfulness programmes can improve employees' capacities to listen and work well with others, to effectively manage time, and to experience enhanced satisfaction both at work and in life. Limitations worth considering include the afterglow effect and the lack of generalisability, as this study was conducted on a small and fairly homogeneous sample.
Keywords: corporate mindfulness, listening skills, organisational well being, prioritising skills, mindful leadership
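The pre/post comparison described above amounts to one paired t-test per parameter; a minimal sketch with placeholder scores follows (n = 20 managers; the effect size is set to mimic the reported 23% listening-skill gain, not taken from the study's data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
pre = rng.normal(60, 8, 20)                # placeholder listening-skill scores
post = pre * 1.23 + rng.normal(0, 3, 20)   # ~23% improvement, as reported

# Paired t-test: each manager serves as their own control.
t, p = stats.ttest_rel(post, pre)
improvement = (post.mean() - pre.mean()) / pre.mean() * 100
print(f"t = {t:.2f}, p = {p:.4f}, mean improvement = {improvement:.1f}%")
```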
Procedia PDF Downloads 271
148 Application of the Carboxylate Platform in the Consolidated Bioconversion of Agricultural Wastes to Biofuel Precursors
Authors: Sesethu G. Njokweni, Marelize Botes, Emile W. H. Van Zyl
Abstract:
An alternative strategy to the production of bioethanol is to examine the degradability of biomass in a natural system such as the rumen of mammals. This anaerobic microbial community has higher cellulolytic activities than microbial communities from other habitats and degrades cellulose to produce volatile fatty acids (VFA), methane and CO₂. VFAs have the potential to serve as intermediate products for electrochemical conversion to hydrocarbon fuels. In vitro mimicking of this process would be more cost-effective than bioethanol production as it does not require chemical pre-treatment of biomass, a sterile environment or added enzymes. The strategies of the carboxylate platform and co-cultures of a bovine ruminal microbiota from cannulated cows were combined in order to investigate and optimize the bioconversion of agricultural biomass (apple and grape pomace, citrus pulp, sugarcane bagasse and triticale straw) to high-value VFAs as intermediates for biofuel production in a consolidated bioprocess. Optimisation of reactor conditions was investigated using five ruminal inoculum concentrations (5, 10, 15, 20 and 25%) with the pH fixed at 6.8 and the temperature at 39 °C. The ANKOM 200/220 fiber analyser was used to analyse in vitro neutral detergent fiber (NDF) disappearance of the feedstuffs. Fresh and cryo-frozen (5% DMSO and 50% glycerol for 3 months) rumen cultures were tested for retention of fermentation capacity and durability in 72 h fermentations in 125 ml serum vials, using a FURO Medical Solutions 6-valve gas manifold to induce anaerobic conditions. Fermentation of apple pomace, triticale straw and grape pomace showed no significant difference (P > 0.05) between the 15 and 20% inoculum concentrations for total VFA yield. However, high-performance liquid chromatographic separation within the two inoculum concentrations showed a significant difference (P < 0.05) in acetic acid yield, with the 20% inoculum concentration being the optimum at 4.67 g/l. NDF disappearance of 85% in 96 h and a total VFA yield of 11.5 g/l in 72 h (A/P ratio = 2.04) indicated that apple pomace was the optimal feedstuff for this process. The NDF disappearance and VFA yield of DMSO-stored (82% NDF disappearance and 10.6 g/l VFA) and glycerol-stored (90% NDF disappearance and 11.6 g/l VFA) rumen cultures also showed degradability of apple pomace similar to a fresh rumen control, with no treatment effect differences (P > 0.05). This lack of treatment effects indicates that the preserved cultures retained their fermentation capacity, suggesting that their metabolic characteristics were preserved due to the resilience and redundancy of the rumen culture. The extent of degradability and VFA yield achieved within a short span was similar to other carboxylate platforms that have longer run times. This study shows that, by virtue of faster rates and a high extent of degradability, small-scale alternatives to bioethanol such as rumen microbiomes and other natural fermenting microbiomes can be employed to enhance the feasibility of large-scale biofuel implementation.
Keywords: agricultural wastes, carboxylate platform, rumen microbiome, volatile fatty acids
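The two feedstock metrics quoted above reduce to simple ratios. The sketch below reproduces the reported apple-pomace values; the NDF masses are hypothetical, chosen only so the arithmetic matches the reported 85% disappearance, and the propionate concentration is back-calculated from the stated A/P ratio.

```python
def ndf_disappearance(initial_ndf_g: float, residual_ndf_g: float) -> float:
    """In vitro NDF disappearance as a percentage of the starting NDF."""
    return 100 * (initial_ndf_g - residual_ndf_g) / initial_ndf_g

def acetate_propionate_ratio(acetate_g_l: float, propionate_g_l: float) -> float:
    """A/P ratio from HPLC-measured VFA concentrations."""
    return acetate_g_l / propionate_g_l

print(ndf_disappearance(10.0, 1.5))          # -> 85.0 (% in 96 h, apple pomace)
print(acetate_propionate_ratio(4.67, 2.29))  # -> ~2.04, as reported
```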
Procedia PDF Downloads 130
147 Regulation Effect of Intestinal Microbiota by Fermented Processing Wastewater of Yuba
Authors: Ting Wu, Feiting Hu, Xinyue Zhang, Shuxin Tang, Xiaoyun Xu
Abstract:
As a by-product of yuba, the processing wastewater of yuba (PWY) contains many bioactive components, such as soybean isoflavones, soybean polysaccharides and soybean oligosaccharides, which make it a good source of prebiotics with potential for high-value utilization. Fermenting PWY with Lactobacillus plantarum can produce a potential biogenic product that regulates the balance of the intestinal microbiota. In this study, Lactobacillus plantarum was first used to ferment PWY to increase its content of active components and its antioxidant activity. The health effect of fermented processing wastewater of yuba (FPWY) was then measured in vitro. Finally, microencapsulation technology was applied to improve the sustained release of FPWY, reduce the loss of active components during digestion, and improve the activity of FPWY. The main results are as follows: (1) FPWY presented a good antioxidant capacity, with DPPH free radical scavenging ability (0.83 ± 0.01 mmol Trolox/L), ABTS free radical scavenging ability (7.47 ± 0.35 mmol Trolox/L) and iron ion-reducing ability (1.11 ± 0.07 mmol Trolox/L). Compared with non-fermented processing wastewater of yuba (NFPWY), there was no significant difference in the content of total soybean isoflavones, but the content of glucoside soybean isoflavones decreased and that of aglycone soybean isoflavones increased significantly. Fermentation also effectively reduced the soluble monosaccharides, disaccharides and oligosaccharides in PWY, such as glucose, fructose, galactose, trehalose, stachyose, maltose, raffinose and sucrose. (2) FPWY can significantly enhance the growth of beneficial bacteria such as Bifidobacterium, Ruminococcus and Akkermansia, significantly inhibit the growth of the harmful bacterium E. coli, regulate the structure of the intestinal microbiota, and significantly increase the content of short-chain fatty acids such as acetic acid, propionic acid, butyric acid and isovaleric acid. The higher amount of lactic acid in the gut can be further broken down into short-chain fatty acids. (3) In order to improve the stability of the soybean isoflavones in FPWY during digestion, sodium alginate and chitosan were used as wall materials, and the FPWY freeze-dried powder was embedded by dropping the mixture into a coagulation bath. The results show that with a core-to-wall ratio of 3:1, a chitosan concentration of 1.5%, a sodium alginate concentration of 2.0% and a calcium concentration of 3%, the embedding rate is 53.20%. In simulated in vitro digestion, the release rate of the microcapsules reached 59.36% at the end of gastric digestion and 82.90% at the end of intestinal digestion; the microcapsules therefore showed good sustained-release performance, with almost complete release of the core material. Structural analysis of the FPWY microcapsules shows that they have good mechanical properties: hardness, springiness, cohesiveness, gumminess, chewiness and resilience were 117.75 ± 0.21 g, 0.76 ± 0.02, 0.54 ± 0.01, 63.28 ± 0.71 g·sec, 48.03 ± 1.37 g·sec and 0.31 ± 0.01, respectively. Compared with unembedded FPWY, the infrared spectra confirmed that the FPWY freeze-dried powder was successfully embedded in the microcapsules.
Keywords: processing wastewater of yuba, Lactobacillus plantarum, intestinal microbiota, microcapsule
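To make the encapsulation arithmetic explicit, here is a minimal sketch. The masses are hypothetical and chosen only to reproduce the reported percentages; the abstract does not give raw isoflavone amounts.

```python
def embedding_rate(encapsulated_mg: float, total_mg: float) -> float:
    """Fraction of the payload actually entrapped in the capsules (%)."""
    return 100 * encapsulated_mg / total_mg

def cumulative_release(released_mg: float, encapsulated_mg: float) -> float:
    """Cumulative release relative to the encapsulated payload (%)."""
    return 100 * released_mg / encapsulated_mg

print(embedding_rate(53.2, 100.0))     # -> 53.20 %, as reported
print(cumulative_release(31.58, 53.2)) # -> ~59.36 % after gastric digestion
print(cumulative_release(44.10, 53.2)) # -> ~82.90 % after intestinal digestion
```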
Procedia PDF Downloads 77
146 Deep Learning for SAR Images Restoration
Authors: Hossein Aghababaei, Sergio Vitale, Giampaolo Ferraioli
Abstract:
In the context of Synthetic Aperture Radar (SAR) data, polarization is an important source of information for Earth's surface monitoring. SAR systems are often designed to transmit only one polarization. This constraint leads to either single or dual polarimetric SAR imaging modalities. Single polarimetric systems operate with a fixed single polarization of both transmitted and received electromagnetic (EM) waves, resulting in a single acquisition channel. Dual polarimetric systems, on the other hand, transmit in one fixed polarization and receive in two orthogonal polarizations, resulting in two acquisition channels. Dual polarimetric systems are obviously more informative than single polarimetric systems and are increasingly being used for a variety of remote sensing applications. In dual polarimetric systems, the choice of polarizations for the transmitter and the receiver is open. The choice of circular transmit polarization and coherent dual linear receive polarizations forms a special dual polarimetric system called hybrid polarimetry, which brings the property of rotational invariance to geometrical orientations of features in the scene and optimizes the design of the radar in terms of reliability, mass, and power constraints. The complete characterization of target scattering, however, requires fully polarimetric data, which can be acquired with systems that transmit two orthogonal polarizations. This adds further complexity to data acquisition and shortens the coverage area or swath of fully polarimetric images compared to the swath of dual or hybrid polarimetric images. Solutions that augment dual polarimetric data to full polarimetric data would therefore allow full characterization and exploitation of the backscattered field over a wider coverage with less system complexity. Several methods for reconstructing fully polarimetric images from hybrid polarimetric data can be found in the literature. Although the improvements achieved by these reconstruction techniques are undeniable, the existing methods are mostly based upon model assumptions (especially the assumption of reflection symmetry), which may limit their reliability and applicability in vegetation and forest scenarios. To overcome these problems, this paper proposes a new framework for reconstructing fully polarimetric information from hybrid polarimetric data. The framework uses deep learning to augment hybrid polarimetric data without relying on model assumptions. A convolutional neural network (CNN) with a specific architecture and loss function is defined for this augmentation problem, focusing on different scattering properties of the polarimetric data. In particular, the method controls the CNN training process with respect to several characteristic features of polarimetric images, defined by the combination of different terms in the cost or loss function. The proposed method is experimentally validated with real data sets and compared with a well-known and standard approach from the literature. In the experiments, the reconstruction performance of the proposed framework is superior to that of conventional reconstruction methods. The pseudo fully polarimetric data reconstructed by the proposed method also agree well with the actual fully polarimetric images acquired by radar systems, confirming the reliability and efficiency of the proposed method.
Keywords: SAR image, polarimetric SAR image, convolutional neural network, deep learning, deep neural network
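A heavily simplified sketch of the augmentation idea in PyTorch: a CNN mapping the two hybrid-polarimetric channels to the nine real parameters of a full-polarimetric covariance matrix, trained with a composite loss whose terms target different scattering properties. The architecture, channel counts, and loss weights are assumptions for illustration, not the authors' actual design.

```python
import torch
import torch.nn as nn

class PolAugmentCNN(nn.Module):
    """Toy per-pixel regressor from hybrid-pol (2 ch) to full-pol (9 ch)."""
    def __init__(self, in_ch: int = 2, out_ch: int = 9):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, out_ch, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

def composite_loss(pred, target, w_pixel=1.0, w_span=0.1):
    """Weighted sum of terms: per-parameter L1 plus a total-power (span)
    term, here taken as the sum of the three diagonal channels."""
    pixel = nn.functional.l1_loss(pred, target)
    span = nn.functional.l1_loss(pred[:, :3].sum(1), target[:, :3].sum(1))
    return w_pixel * pixel + w_span * span

model = PolAugmentCNN()
x = torch.randn(4, 2, 64, 64)   # hybrid-pol patches (batch, ch, H, W)
y = torch.randn(4, 9, 64, 64)   # full-pol covariance parameters
loss = composite_loss(model(x), y)
loss.backward()
print(loss.item())
```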
Procedia PDF Downloads 71
145 Artificial Intelligence in Management Simulators
Authors: Nuno Biga
Abstract:
Artificial Intelligence (AI) has the potential to transform management in several impactful ways. It allows machines to interpret information, find patterns in big data, learn from context analysis, optimize operations, make predictions sensitive to each specific situation and support data-driven decision making. The introduction of an 'artificial brain' in an organization also enables learning through the complex information and data provided by those who train it, namely its users. The 'Assisted-BIGAMES' version of the Accident & Emergency (A&E) simulator introduces the concept of a context-sensitive 'Virtual Assistant' (VA) that provides users with useful suggestions, such as: a) relocating workstations in order to shorten travelled distances and minimize the stress of those involved; b) identifying in real time existing bottleneck(s) in the operations system so that they can be acted upon quickly; c) identifying resources that should be polyvalent so that the system can be more efficient; d) identifying the specific processes in which it may be advantageous to establish partnerships with other teams; and e) assessing possible solutions based on the suggested KPIs, allowing action monitoring to guide the (re)definition of future strategies. This paper is built on the BIGAMES© simulator and presents the conceptual AI model developed and demonstrated through a pilot project (BIG-AI). Each Virtual Assisted BIGAME is a management simulator, developed by the author, that guides operational and strategic decision making, providing users with useful information in the form of management recommendations that make it possible to predict the actual outcome of different alternative strategic actions. The pilot project incorporates results from 12 editions of the BIGAME A&E that took place between 2017 and 2022 at AESE Business School, based on a compilation of data that allows causal relationships to be established between decisions taken and results obtained. The systemic analysis and interpretation of data is powered in Assisted-BIGAMES by a computer application called the 'BIGAMES Virtual Assistant' (VA) that players can use during the game. Throughout the game, each participant continually asks which decisions to make in order to win the competition. To this end, the role of each team's VA is to guide the players to be more effective in their decision making by presenting recommendations based on AI methods. It is important to note that the VA's suggestions for action can be accepted or rejected by the managers of each team, as they gain a better understanding of the issues over time, reflect on good practice and rely on their own experience, capability and knowledge to support their own decisions. Preliminary results show that the introduction of the VA leads to faster learning of the decision-making process. A facilitator designated the 'Serious Game Controller' (SGC) is responsible for supporting the players with further analysis. The actions recommended by the SGC may differ from or be similar to those previously provided by the VA, ensuring a higher degree of robustness in decision-making. Additionally, all the information should be jointly analyzed and assessed by the players, who are expected to add 'Emotional Intelligence', an essential component absent from the machine learning process.
Keywords: artificial intelligence, gamification, key performance indicators, machine learning, management simulators, serious games, virtual assistant
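A toy, rule-based sketch of the Virtual Assistant idea: live KPIs go in, management recommendations that players may accept or reject come out. The KPI names, thresholds and rules are illustrative assumptions; the actual Assisted-BIGAMES engine derives such relationships from the 12 editions of game data.

```python
def virtual_assistant(kpis: dict) -> list[str]:
    """Turn a KPI snapshot into human-readable recommendations."""
    advice = []
    for station, util in kpis["utilization"].items():
        if util > 0.90:  # likely bottleneck, per suggestion (b) above
            advice.append(f"Bottleneck at {station} (util {util:.0%}): add capacity or reroute.")
    if kpis["avg_travel_distance_m"] > 50:  # suggestion (a)
        advice.append("Relocate workstations to shorten travelled distances.")
    if kpis["single_skill_resources"] >= 3:  # suggestion (c)
        advice.append("Cross-train staff: several resources should be polyvalent.")
    return advice or ["No action needed: system within target KPIs."]

snapshot = {
    "utilization": {"triage": 0.95, "x_ray": 0.70},
    "avg_travel_distance_m": 62,
    "single_skill_resources": 4,
}
for tip in virtual_assistant(snapshot):
    print(tip)
```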
Procedia PDF Downloads 105
144 Scalable CI/CD and Scalable Automation: Assisting in Optimizing Productivity and Fostering Delivery Expansion
Authors: Solanki Ravirajsinh, Kudo Kuniaki, Sharma Ankit, Devi Sherine, Kuboshima Misaki, Tachi Shuntaro
Abstract:
In software development life cycles, the absence of scalable CI/CD significantly impacts organizations, leading to increased overall maintenance costs, prolonged release delivery times, heightened manual effort, and difficulties in meeting tight deadlines. Implementing CI/CD with standard serverless technologies using cloud services overcomes all of the above-mentioned issues and helps organizations improve efficiency and deliver faster without needing to manage server maintenance and capacity. By integrating scalable CI/CD with scalable automation testing, productivity, quality, and agility are enhanced while repetitive work and manual effort are reduced. Scalable CI/CD for development can be implemented using cloud services such as ECS (Elastic Container Service), AWS Fargate, ECR (to store Docker images with all dependencies), serverless computing (serverless virtual machines), cloud logging (for monitoring errors and logs), security groups (for inside/outside access to the application), Docker containerization (Docker-based images and container techniques), Jenkins (a CI/CD build management tool), and code management tools (GitHub, Bitbucket, AWS CodeCommit); such a setup can efficiently handle the demands of diverse development environments and accommodate dynamic workloads, increasing efficiency for faster delivery with good quality. CI/CD pipelines encourage collaboration among development, operations, and quality assurance teams by providing a centralized platform for automated testing, deployment, and monitoring. Scalable CI/CD streamlines the development process by automatically fetching the latest code from the repository every time the process starts, building the application based on the branches, testing the application using a scalable automation testing framework, and deploying the builds. Developers can focus more on writing code and less on managing infrastructure, as the pipeline scales based on need. Serverless CI/CD eliminates the need to manage and maintain traditional CI/CD infrastructure, such as servers and build agents, reducing operational overhead and allowing teams to allocate resources more efficiently. Scalable CI/CD adjusts the application's scale according to usage, thereby alleviating concerns about scalability, maintenance costs, and resource needs. Building scalable automation testing on cloud services (ECR, ECS Fargate, Docker, EFS, serverless computing) lets organizations run more than 500 test cases in parallel, aiding the detection of race conditions and performance issues while reducing execution time. Scalable CI/CD offers flexibility, dynamically adjusting to varying workloads and demands and allowing teams to scale resources up or down as needed. It optimizes costs by paying only for the resources actually used, and it increases reliability. Scalable CI/CD pipelines employ automated testing and validation processes to detect and prevent errors early in the development cycle.
Keywords: achieve parallel execution, cloud services, scalable automation testing, scalable continuous integration and deployment
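A minimal local sketch of the parallel-test-execution idea: fan test files out across worker processes, much as the containerized setup fans them out across Fargate tasks. The test paths are hypothetical, and a process pool stands in for the cloud containers; the fan-out/collect pattern is the point.

```python
from concurrent.futures import ProcessPoolExecutor, as_completed
import subprocess
import sys

TEST_CASES = [f"tests/test_case_{i:03d}.py" for i in range(500)]  # hypothetical paths

def run_test(path: str) -> tuple[str, int]:
    """Run one test file in its own process; return (path, exit code)."""
    result = subprocess.run([sys.executable, "-m", "pytest", path],
                            capture_output=True)
    return path, result.returncode

if __name__ == "__main__":
    failures = []
    # each worker plays the role of one container/Fargate task
    with ProcessPoolExecutor(max_workers=32) as pool:
        for future in as_completed(pool.submit(run_test, t) for t in TEST_CASES):
            path, code = future.result()
            if code != 0:
                failures.append(path)
    print(f"{len(TEST_CASES) - len(failures)} passed, {len(failures)} failed")
```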
Procedia PDF Downloads 48
143 Assessment and Characterization of Dual-Hardening Adhesion Promoter for Self-Healing Mechanisms in Metal-Plastic Hybrid System
Authors: Anas Hallak, Latifa Seblini, Juergen Wilde
Abstract:
In mechatronics and sensor technology, plastic housings are used to protect sensitive components from harmful environmental influences such as moisture, media, or reactive substances. Connections through the housing wall, preferably in the form of metallic lead-frame structures, are required for electrical supply or control. In such systems, an insufficient bond between the plastic component, e.g., polyamide 66, and the metal surface, e.g., copper, caused by the incompatibility of the two materials, is the dominant weakness. As a result, leakage paths can occur along the plastic-metal interface. Since adhesive bonding has become established as one of the most important joining processes, with its use expanding significantly thanks to improved high-performance adhesives and bonding techniques, this technology has also been applied to metal-plastic hybrid structures. In this study, an epoxy bonding agent from DELO (DUALBOND LT2266) was used to improve the mechanical and chemical bonding between the metal and the polymer. It is an adhesion promoter with two reaction stages. In the first stage, fixation to the lead frame is achieved directly after the coating step by UV exposure for a few seconds. In the second stage, the material is thermally hardened during injection molding. To analyze the two reaction stages of the primer, dynamic DSC experiments were carried out and correlated with Fourier-transform infrared spectroscopy measurements. Furthermore, the number of crosslinking bonds formed in the system in each reaction stage was estimated by rheological characterization. These investigations were performed with different UV exposure times (12 and 96 s) and over an industrially relevant temperature range from -20 to 175 °C. The shear viscosity of the primer was measured as a function of temperature and exposure time. For further interpretation, the storage modulus values were calculated and the so-called Booij-Palmen plot was sketched. The next aspect of this study was the self-healing mechanism in the hybrid system, in which the primer should flow into micro-damage such as interface cracks, inhibit their growth, and close them. The ability of the primer to flow into and penetrate defined capillaries made in Ultramid was investigated: holes with a diameter of 0.3 mm were produced in injection-molded A3EG7 plates of 4 mm thickness, and a copper substrate coated with the DUALBOND was placed on each A3EG7 plate and pressed with a defined force. Metallographic analyses verified a filling ratio of the capillaries of almost 95%. Finally, to assess the self-healing mechanism in metal-plastic hybrid systems, characterizations were performed on a simple geometry with a metal inlay developed by the Institute of Polymer Technology at Friedrich-Alexander-University. The specimens were modified with a tungsten wire that was pulled out after injection molding to create a micro-hole in the specimen at the interface between the primer and the polymer. The capability of the primer to heal these micro-cracks upon heating, pressing, and thermal aging was characterized through metallographic analyses.
Keywords: hybrid structures, self-healing, thermoplastic housing, adhesive
Procedia PDF Downloads 195
142 Shear Strength Characterization of Coal Mine Spoil in Very-High Dumps with Large Scale Direct Shear Testing
Authors: Leonie Bradfield, Stephen Fityus, John Simmons
Abstract:
The shearing behavior of current and planned coal mine spoil dumps up to 400 m in height is studied using large-sample, high-stress direct shear tests performed on a range of spoils common to the coalfields of Eastern Australia. The motivation for the study is to address industry concerns that some constructed spoil dump heights (>350 m) are exceeding the scale (≤120 m) for which reliable design information exists, and the fact that modern geotechnical laboratories are not equipped to test representative spoil specimens at field-scale stresses. For more than two decades, shear strength estimation for spoil dumps has been based either on infrequent, very small-scale tests in which oversize particles are scalped to comply with device specimen-size capacity, such that the influence of prototype-sized particles on shear strength is not captured, or on published guidelines that provide linear shear strength envelopes derived from small-scale test data and verified in practice by the slope performance of dumps up to 120 m in height. To date, these published guidelines appear to have been reliable. However, in the field of rockfill dam design there is broad acceptance of a curvilinear shear strength envelope, and if this is applicable to coal mine spoils, then these industry-accepted guidelines may overestimate the strength and stability of dumps at higher stress levels. The pressing need to rationally define the shearing behavior of more representative spoil specimens at field-scale stresses led to the successful design, construction and operation of a large direct shear machine (LDSM) and its subsequent application to provide reliable design information for current and planned very-high dumps. The LDSM can test at a much larger scale, in terms of combined specimen size (720 mm x 720 mm x 600 mm) and stress (σn up to 4.6 MPa), than has ever previously been achieved with a direct shear machine for geotechnical testing of rockfill. The results of an extensive LDSM testing program on a wide range of coal-mine spoils are compared to a published framework that is widely accepted by the Australian coal mining industry as the standard for shear strength characterization of mine spoil. A critical outcome is that the LDSM data highlight several non-compliant spoils, and stress-dependent shearing behavior, for which the correct application of the published framework will not provide reliable shear strength parameters for design. Shear strength envelopes developed from the LDSM data are also compared with dam engineering knowledge, where the failure envelopes of rockfills are curved in a concave-down manner. The LDSM data indicate that shear strength envelopes for coal-mine spoils abundant with rock fragments are not in fact curved, and that the shape of the failure envelope is ultimately determined by the strength of the rock fragments. Curvilinear failure envelopes were found to be appropriate for soil-like spoils containing minor or no rock fragments, or hard-soil aggregates.
Keywords: coal mine, direct shear test, high dump, large scale, mine spoil, shear strength, spoil dump
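To illustrate the curvilinear-envelope concept discussed above, here is a sketch of fitting the power-law form commonly used for rockfill, tau = A * sigma_n**b (b < 1 gives the concave-down shape), to peak direct-shear data. The (sigma_n, tau) pairs are synthetic, not LDSM results; in practice one would compare this fit against a linear envelope over the same stress range.

```python
import numpy as np
from scipy.optimize import curve_fit

sigma_n = np.array([0.2, 0.5, 1.0, 2.0, 3.0, 4.6])      # normal stress, MPa
tau = np.array([0.19, 0.42, 0.75, 1.33, 1.85, 2.60])    # peak shear stress, MPa

def envelope(s, A, b):
    """Power-law shear strength envelope."""
    return A * s ** b

(A, b), _ = curve_fit(envelope, sigma_n, tau, p0=(0.8, 0.9))
print(f"tau = {A:.2f} * sigma_n^{b:.2f}")  # b < 1 -> concave-down envelope
```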
Procedia PDF Downloads 162
141 Blended Learning Instructional Approach to Teach Pharmaceutical Calculations
Authors: Sini George
Abstract:
Active learning pedagogies are valued for their success in increasing 21st-century learners' engagement, developing transferable skills like critical thinking and quantitative reasoning, and creating deeper and more lasting educational gains. 'Blended learning' is an active-learning pedagogical approach in which direct instruction moves from the group learning space to the individual learning space, and the resulting group space is transformed into a dynamic, interactive learning environment where the educator guides students as they apply concepts and engage creatively with the subject matter. This project aimed to develop a blended learning instructional approach to teaching concepts around pharmaceutical calculations to year 1 pharmacy students. The wrong dose, strength or frequency of a medication accounts for almost a third of medication errors in the NHS; therefore, progression to year 2 requires a 70% pass in this calculation test, in addition to the standard progression requirements. Many students had struggled to achieve this requirement in the past. It was also challenging to teach these concepts to a large class (>130) of students with mixed mathematical abilities, especially within a traditional didactic lecture format. Therefore, short screencasts with a voice-over by the lecturer were provided in advance of each of four teaching sessions (two hours per session), incorporating the core content of each session and talking through how the lecturer approached the calculations, in order to model metacognition. Links to the screencasts were posted on the learning management system. Viewership counts confirmed that the students were indeed accessing and watching the screencasts on schedule. In the classroom, students applied the knowledge learned beforehand to a series of increasingly difficult questions. Students were then asked to create a question in group settings (two students per group) and to discuss the questions created by their peers within their groups to promote deep conceptual learning. Students were also given a question-and-answer period to seek clarification on the concepts covered. Student responses to this instructional approach and their test grades were collected. After collecting and organizing the data, statistical analysis was carried out to calculate binomial statistics for the two data sets, namely the test grades of students who received blended learning instruction and the test grades of students who received instruction in a standard in-class lecture format, in order to compare the effectiveness of each type of instruction. Student responses and performance data on the assessment indicate that learning the content through the blended learning instructional approach led to higher levels of student engagement and satisfaction and more substantial learning gains. The blended learning approach enabled each student to learn how to do the calculations at their own pace, freeing class time for interactive application of this knowledge. Although time-consuming for an instructor to implement, the findings of this research demonstrate that the blended learning instructional approach improves student academic outcomes and represents a valuable method of incorporating active learning methodologies while still maintaining broad content coverage. Satisfaction with this approach was high, and we are currently developing more pharmacy content for delivery in this format.
Keywords: active learning, blended learning, deep conceptual learning, instructional approach, metacognition, pharmaceutical calculations
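A sketch of the cohort comparison described above, treating each student's result as pass/fail at the 70% threshold and comparing the two pass rates with a two-proportion z-test. The counts are hypothetical; the paper reports only that binomial statistics were computed for the two data sets.

```python
from statsmodels.stats.proportion import proportions_ztest

passed = [118, 96]   # students scoring >= 70% (blended, lecture) -- assumed counts
cohort = [130, 130]  # cohort sizes -- assumed

z, p = proportions_ztest(passed, cohort)
print(f"z = {z:.2f}, p = {p:.4f}")
print(f"pass rates: {passed[0]/cohort[0]:.0%} vs {passed[1]/cohort[1]:.0%}")
```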
Procedia PDF Downloads 172
140 Educational Audit and Curricular Reforms in the Arabian Context
Authors: Irum Naz
Abstract:
In the Arabian higher education context, linguistic proficiency in the English language is considered crucial for the developmental sustainability, economic growth, and stability of communities and societies. Qatar's educational reform package, through the 2030 vision, identifies the acquisition of English at K-12 as an essential communication tool for globalization, in the belief that Qatari students need better preparation to take on the responsibilities of leadership and to participate effectively in the country's surging economy. The idea of introducing Qatari students to modern curricula benchmarked against high-student-performance curricula in developed countries is one component of the reform design principles of the Education for a New Era project, mutually consented to and supported by the Office of Shared Services, the Communications Office, and the Supreme Education Council. In appreciation of the government's vision, the English Language Centre (ELC) at the Community College of Qatar ran an internal educational audit and conducted evaluative research to understand and appraise the value, impact, and practicality of the existing ELC language development program. This study sought to identify the type of change that could improve the quality of Foundation Program courses and the manner in which second language learners could be assisted to transition smoothly between ELC levels. Following an interpretivist paradigm and a mixed research method, data were gathered through a bicyclic research model and a triangular design. Analysis of the data suggested that the ELC program needed improvement as a whole, particularly in terms of curriculum, student learning outcomes, and the general learning environment in the department. Key findings suggest that the program would benefit from significant revisions, including narrowing the focus of the courses, providing sets of specific learning objectives, and preventing repetition between levels. Another promising finding concerned the assessment tools and process: the data suggested that a set of standardized assessments more closely suited to the programs of study should be devised. It was also recommended that students undergo a more comprehensive placement process to ensure that they begin the program at an appropriate level and get the maximum benefit from their learning experience. Tying into the idea of a curriculum revamp, it was expected that students could leave the ELC having had exposure to courses in English for specific purposes. The idea of a more reliable exit assessment was raised frequently, so that the ELC could regulate itself and ensure optimum learning outcomes. Another important recommendation was the provision of a Student Learning Center that would offer students personalized tuition, differentiated instruction, and a self-driven and self-evaluated learning experience. In addition, it was recommended that an extra study level be added to the program to accommodate the different levels of English language proficiency represented among ELC students. The evidence collected in the course of the study suggests that significant change is needed in the structure of the ELC program, specifically regarding the curriculum, the program learning outcomes, and the learning environment in general.
Keywords: educational audit, ESL, optimum learning outcomes, Qatar's educational reforms, self-driven and self-evaluated learning experience, Student Learning Center
Procedia PDF Downloads 186
139 3D Printing of Polycaprolactone Scaffold with Multiscale Porosity Via Incorporation of Sacrificial Sucrose Particles
Authors: Mikaela Kutrolli, Noah S. Pereira, Vanessa Scanlon, Mohamadmahdi Samandari, Ali Tamayol
Abstract:
Bone tissue engineering has drawn significant attention, and various biomaterials have been tested. Polymers such as polycaprolactone (PCL) offer excellent biocompatibility, reasonable mechanical properties, and biodegradability. However, PCL scaffolds suffer a critical drawback: a lack of micro/mesoporosity, which affects cell attachment, tissue integration, and mineralization, and also results in a slow degradation rate. While 3D printing has addressed macroporosity through CAD-guided fabrication, PCL scaffolds still exhibit poor smaller-scale porosity. To overcome this, we generated composites of PCL, hydroxyapatite (HA), and powdered sucrose (PS), the latter serving as a sacrificial material that leaves porous struts after sucrose dissolution. Additionally, we incorporated dexamethasone (DEX) to boost the osteogenic properties of the PCL. The resulting scaffolds maintain the controlled macroporosity of the lattice print structure but also develop micro/mesoporosity within the PCL fibers when exposed to aqueous environments. The study involved mixing PS into solvent-dissolved PCL at different PS-to-PCL weight ratios (70:30, 50:50, and 30:70 wt%). The resulting composite was used for 3D printing of scaffolds at room temperature. Printability was optimized by adjusting pressure, speed, and layer height through filament collapse and fusion tests. Enzymatic degradation, porogen leaching, and DEX release profiles were characterized. Physical properties were assessed using wettability measurements, SEM, and micro-CT to quantify the porosity (percentage, pore size, and interconnectivity). Raman spectroscopy was used to verify the absence of sugar after leaching. Mechanical characteristics were evaluated via compression testing before and after porogen leaching. The behavior of bone marrow stromal cells (BMSCs) in the printed scaffolds was studied by assessing viability, metabolic activity, osteo-differentiation, and mineralization. The scaffolds with a 70% sucrose concentration exhibited superior printability and reached the highest porosity of 80%, but performed poorly in mechanical testing. A 50% PS concentration yielded 70% porosity, with an average pore size of 25 µm, favoring cell attachment. No trace of sucrose was found by Raman spectroscopy after leaching the sugar for 8 hours. Water contact angle results show improved hydrophilicity as the sucrose concentration increased, making the scaffolds more conducive to cell adhesion. BMSCs showed positive viability and proliferation, with an increasing trend in mineralization and osteo-differentiation as the sucrose concentration increased. The addition of HA and DEX also promoted mineralization and osteo-differentiation in the cultures. The integration of PS as a porogen at a concentration of 50 wt% within PCL scaffolds thus presents a promising approach to addressing the poor cell attachment and tissue integration of PCL in bone tissue engineering. The method allows the fabrication of scaffolds with tunable porosity and mechanical properties suitable for various applications, and the addition of HA and DEX further enhanced the scaffolds. Future studies will apply the scaffolds in an in vivo model to thoroughly investigate their performance.
Keywords: bone, PCL, 3D printing, tissue engineering
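As a complement to the micro-CT analysis mentioned above, total porosity can be estimated gravimetrically: porosity = 1 - rho_bulk / rho_solid, with the solid density of the PCL/HA matrix from a rule of mixtures. This is a back-of-the-envelope sketch; the densities are literature-typical values and the sample mass/volume are hypothetical, chosen to land near the reported ~70% for the 50% PS formulation.

```python
def solid_density(frac_pcl: float = 0.9, rho_pcl: float = 1.145,
                  rho_ha: float = 3.16) -> float:
    """Rule-of-mixtures density (g/cm^3) of a PCL/HA matrix by mass fraction."""
    return 1.0 / (frac_pcl / rho_pcl + (1 - frac_pcl) / rho_ha)

def porosity(mass_g: float, volume_cm3: float, rho_solid: float) -> float:
    """Total porosity (%) from bulk vs. solid density."""
    return 100 * (1 - (mass_g / volume_cm3) / rho_solid)

rho_s = solid_density()
print(f"{porosity(0.183, 0.5, rho_s):.0f}% porous")  # ~70%, as for 50% PS
```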
Procedia PDF Downloads 59
138 Developing Early Intervention Tools: Predicting Academic Dishonesty in University Students Using Psychological Traits and Machine Learning
Authors: Pinzhe Zhao
Abstract:
This study focuses on predicting university students' cheating tendencies using psychological traits and machine learning techniques. Academic dishonesty is a significant issue that compromises the integrity and fairness of educational institutions. While much research has been dedicated to detecting cheating behaviors after they have occurred, there is limited work on predicting such tendencies before they manifest. The aim of this research is to develop a model that can identify students who are at higher risk of engaging in academic misconduct, allowing for earlier interventions to prevent such behavior. Psychological factors are known to influence students' likelihood of cheating. Research shows that traits such as test anxiety, moral reasoning, self-efficacy, and achievement motivation are strongly linked to academic dishonesty. High levels of anxiety may lead students to cheat as a way of coping with pressure. Those with lower self-efficacy are less confident in their academic abilities, which can push them toward dishonest behaviors to secure better outcomes. Students with weaker moral judgment may also justify cheating more easily, believing it to be less wrong under certain conditions. Achievement motivation plays a role as well: students driven primarily by external rewards, such as grades, are more likely to cheat than those motivated by intrinsic learning goals. In this study, data on students' psychological traits are collected through validated assessments, including scales for anxiety, moral reasoning, self-efficacy, and motivation. Additional data on academic performance, attendance, and engagement in class are also gathered to create a more comprehensive profile. Using machine learning algorithms such as Random Forest, Support Vector Machines (SVM), and Long Short-Term Memory (LSTM) networks, the research builds models that can predict students' cheating tendencies. These models are trained and evaluated using metrics such as accuracy, precision, recall, and F1 score to ensure they provide reliable predictions. The findings demonstrate that combining psychological traits with machine learning provides a powerful method for identifying students at risk of cheating. This approach allows for early detection and intervention, enabling educational institutions to take proactive steps to promote academic integrity. The predictive model can be used to inform targeted interventions, such as counseling for students with high test anxiety or workshops aimed at strengthening moral reasoning. By addressing the underlying factors that contribute to cheating behavior, educational institutions can reduce the occurrence of academic dishonesty and foster a culture of integrity. In conclusion, this research contributes to the growing body of literature on predictive analytics in education. It offers a novel approach that integrates psychological assessments with machine learning to predict cheating tendencies. This method has the potential to significantly improve how academic institutions address academic dishonesty, shifting the focus from punishment after the fact to prevention before it occurs. By identifying high-risk students and providing them with the necessary support, educators can help maintain the fairness and integrity of the academic environment.
Keywords: academic dishonesty, cheating prediction, intervention strategies, machine learning, psychological traits, academic integrity
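A minimal sketch of the pipeline described above: trait features in, a binary "at risk" label out, evaluated with the named metrics. The data here is randomly generated under the abstract's stated direction of effects (risk rises with anxiety and extrinsic motivation, falls with moral reasoning and self-efficacy); the feature names are the traits discussed, not the study's actual instrument scores.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 600
X = np.column_stack([
    rng.normal(50, 10, n),  # test anxiety
    rng.normal(50, 10, n),  # moral reasoning
    rng.normal(50, 10, n),  # self-efficacy
    rng.normal(50, 10, n),  # extrinsic achievement motivation
])
# Synthetic label consistent with the reported trait-risk relationships.
logit = (0.08 * (X[:, 0] - 50) - 0.06 * (X[:, 1] - 50)
         - 0.05 * (X[:, 2] - 50) + 0.07 * (X[:, 3] - 50))
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, model.predict(X_te)))  # precision/recall/F1
```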
Procedia PDF Downloads 23
137 Kinematic Gait Analysis Is a Non-Invasive, More Objective and Earlier Measurement of Impairment in the Mdx Mouse Model of Duchenne Muscular Dystrophy
Authors: P. J. Sweeney, T. Ahtoniemi, J. Puoliväli, T. Laitinen, K. Lehtimäki, A. Nurmi, D. Wells
Abstract:
Duchenne muscular dystrophy (DMD) is caused by an X-linked mutation in the dystrophin gene; lack of dystrophin causes progressive muscle necrosis, which leads to a progressive decrease in mobility in those suffering from the disease. The MDX mouse, a mutant model which displays a frank dystrophinopathy, is currently widely employed in preclinical efficacy models for treatments and therapies aimed at DMD. In general, the end-points examined within this model have been based on invasive histopathology of muscles and serum biochemical measures such as serum creatine kinase (sCK). It is established that a 'critical period' exists between 4 and 6 weeks of age, when there is extensive muscle damage that is largely subclinical but evident from sCK measurements and histopathological staining. However, a full characterization of the MDX model remains largely incomplete, especially with respect to the ability to aggravate the muscle damage beyond the critical period. The purpose of this study was to attempt to aggravate the muscle damage in the MDX mouse and to create a wider, more readily translatable and discernible therapeutic window for the testing of potential therapies for DMD. The study consisted of subjecting 15 male mutant MDX mice and 15 male wild-type mice to an intense chronic exercise regime consisting of twice-weekly treadmill sessions over a 12-month period. Each session was 30 minutes in duration, and the treadmill speed was gradually built up to 14 m/min for the entire session. Baseline plasma creatine kinase (pCK), treadmill training performance and locomotor activity were measured after the critical period at around 10 weeks of age, and again at 14 weeks, 6 months, 9 months and 12 months of age. In addition, kinematic gait analysis was employed, using a novel analysis algorithm, to compare changes in gait and fine motor skills in diseased exercised MDX mice relative to exercised wild-type mice and non-exercised MDX mice. A morphological and metabolic profile (including a lipid profile) of the most severely affected muscles, the gastrocnemius and tibialis anterior, was also measured at the same time intervals. The results indicate that aggravating or exacerbating the underlying muscle damage in the MDX mouse by exercise brings to light a more pronounced and severe phenotype that can be picked up earlier by kinematic gait analysis. A reduction in mobility as measured by open field is not apparent at younger ages nor during the critical period, but changes in gait are apparent in the mutant MDX mice. These gait changes coincide with the pronounced morphological and metabolic changes detected by non-invasive anatomical MRI and proton spectroscopy (1H-MRS) that we have reported elsewhere. Evidence of a progressive asymmetric pathology was found both in the imaging parameters and in the kinematic gait analysis. Taken together, the data show that a chronic exercise regime exacerbates the muscle damage beyond the critical period, and that the ability to measure this through non-invasive means is an important factor to consider when performing preclinical efficacy studies in the MDX mouse.
Keywords: gait, muscular dystrophy, kinematic analysis, neuromuscular disease
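To illustrate the kind of metric kinematic gait analysis yields, here is a left/right symmetry index computed from stride lengths, relevant given the progressive asymmetric pathology reported above. This is a generic, illustrative measure with hypothetical stride values, not the proprietary algorithm used in the study.

```python
import numpy as np

def symmetry_index(left: np.ndarray, right: np.ndarray) -> float:
    """0 = perfectly symmetric gait; larger values = more asymmetry (%)."""
    l, r = left.mean(), right.mean()
    return 100 * abs(l - r) / (0.5 * (l + r))

left_strides = np.array([5.8, 6.0, 5.7, 5.9])   # hind-left stride lengths, cm
right_strides = np.array([5.1, 5.0, 5.3, 5.2])  # hind-right stride lengths, cm
print(f"SI = {symmetry_index(left_strides, right_strides):.1f}%")
```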
Procedia PDF Downloads 276
136 Poly (3,4-Ethylenedioxythiophene) Prepared by Vapor Phase Polymerization for Stimuli-Responsive Ion-Exchange Drug Delivery
Authors: M. Naveed Yasin, Robert Brooke, Andrew Chan, Geoffrey I. N. Waterhouse, Drew Evans, Darren Svirskis, Ilva D. Rupenthal
Abstract:
Poly(3,4-ethylenedioxythiophene) (PEDOT) is a robust conducting polymer (CP) exhibiting high conductivity and environmental stability. It can be synthesized by chemical, electrochemical or vapour phase polymerization (VPP). Dexamethasone sodium phosphate (dexP) is an anionic drug molecule which has previously been loaded onto PEDOT as a dopant via electrochemical polymerization; however, this technique requires conductive surfaces from which polymerization is initiated. VPP, on the other hand, produces highly organized, biocompatible CP structures, allows polymerization on a range of surfaces, and offers a relatively straightforward scale-up process. Following VPP of PEDOT, dexP can be loaded and subsequently released via ion-exchange. This study aimed at preparing and characterising both non-porous and porous VPP PEDOT structures, including examining drug loading and release via ion-exchange. Porous PEDOT structures were prepared by first depositing a sacrificial polystyrene (PS) colloidal template on a substrate, heat-curing the deposition, and then spin coating it with the oxidant solution (iron tosylate) at 1500 rpm for 20 s. VPP of both porous and non-porous PEDOT was achieved by exposing the substrates to monomer vapours in a vacuum oven at 40 mbar and 40 °C for 3 h. Non-porous structures were prepared similarly on the same substrate but without any sacrificial template. Surface morphology, composition and electrochemical behaviour were characterized by atomic force microscopy (AFM), scanning electron microscopy (SEM), X-ray photoelectron spectroscopy (XPS) and cyclic voltammetry (CV), respectively. Drug loading was achieved by 50 CV cycles in a 0.1 M aqueous dexP solution. For drug release, each sample was exposed to 20 mL of phosphate buffered saline (PBS) in a water bath operating at 37 °C and 100 rpm. The film was stimulated (continuous pulses of ±1 V at 0.5 Hz for 17 min) while immersed in PBS. Samples were collected at 1, 2, 6, 23, 24, 26 and 27 h and analysed for dexP by high performance liquid chromatography (HPLC, Agilent 1200 series). AFM and SEM revealed the honeycomb nature of the prepared porous structures. XPS data showed the elemental composition of the dexP-loaded film surface, which related well to that of PEDOT, and indicated approximately one dexP molecule per three EDOT monomer units. The reproducible electroactive nature of the films was shown by several cycles of reduction and oxidation via CV. The drug release data confirmed successful loading of dexP via ion-exchange, with stimulated porous and non-porous structures exhibiting a proof-of-concept burst release upon application of the electrical stimulus. A similar drug release pattern was observed for porous and non-porous structures, without any statistically significant difference, possibly due to the thin nature of these structures. To our knowledge, this is the first report to explore the potential of VPP-prepared PEDOT for stimuli-responsive drug delivery via ion-exchange. The produced porous structures were ordered and highly porous, as indicated by AFM and SEM, and exhibited good electroactivity, as shown by CV. Future work will investigate porous structures as nano-reservoirs to increase drug loading while sealing these structures to minimize spontaneous drug leakage.
Keywords: PEDOT for ion-exchange drug delivery, stimuli-responsive drug delivery, template based porous PEDOT structures, vapour phase polymerization of PEDOT
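A short sketch of turning timed HPLC samples like those above into a cumulative-release curve, applying the standard correction for the drug mass withdrawn with each sample. The concentrations and the 1 mL sample volume are hypothetical; the abstract does not report raw values.

```python
import numpy as np

times_h = np.array([1, 2, 6, 23, 24, 26, 27])                # sampling points used
conc_ug_ml = np.array([1.2, 1.9, 2.6, 3.1, 4.0, 4.4, 4.6])   # dexP by HPLC (assumed)
V_BATH, V_SAMPLE = 20.0, 1.0                                  # mL (sample volume assumed)

released = conc_ug_ml * V_BATH
released[1:] += np.cumsum(conc_ug_ml[:-1] * V_SAMPLE)  # add drug already withdrawn
for t, m in zip(times_h, released):
    print(f"{t:>2} h: {m:6.1f} ug dexP released (cumulative)")
```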
Procedia PDF Downloads 231
135 Simulation-based Decision Making on Intra-hospital Patient Referral in a Collaborative Medical Alliance
Authors: Yuguang Gao, Mingtao Deng
Abstract:
The integration of independently operating hospitals into a unified healthcare service system has become a strategic imperative in the pursuit of hospitals' high-quality development. Central to the concept of group governance over such a transformation, exemplified by a collaborative medical alliance, is the delineation of shared values, vision, and goals. Given the inherent disparity in capabilities among hospitals within the alliance, particularly in the treatment of different diseases characterized by Diagnosis Related Groups (DRG) in terms of effectiveness, efficiency and resource utilization, this study addresses the centralized decision-making of intra-hospital patient referral within the medical alliance, with the aim of enhancing the overall production and quality of service provided. We first introduce the notion of production utility, where a higher production utility for a hospital implies better performance in treating patients diagnosed with that specific DRG group of diseases. A Discrete-Event Simulation (DES) framework is then established for patient referral among hospitals, in which the patient flow model incorporates a queueing system with fixed bed capacities for each hospital. The simulation study begins with a two-member alliance. The pivotal strategy examined is a 'whether-to-refer' decision triggered when the bed usage rate surpasses a predefined threshold for either hospital; the decision then refers patients to the other hospital based on the DRG groups' production utility differentials as well as bed availability. The objective is to maximize the total production utility of the alliance while minimizing patients' average length of stay and the turnover rate. The parameter under scrutiny is thus the bed usage rate threshold, which governs the efficacy of the referral strategy. Extending the study to a three-member alliance, which readily generalizes to multi-member alliances, we maintain the core setup while introducing an additional 'which-to-refer' decision that refers patients with specific DRG groups to the member hospital ranked highest by production utility. The overarching goal remains the same, and the bed usage rate threshold is once again the focal point of the analysis. For the two-member alliance scenario, our simulation results indicate that the optimal bed usage rate threshold hinges on the discrepancy in the number of beds between member hospitals, the distribution of DRG groups among incoming patients, and variations in production utilities across hospitals. Transitioning to the three-member alliance, we observe similar dependencies on these parameters. Additionally, it becomes evident that an imbalanced distribution of DRG diagnoses and further disparity in production utilities among member hospitals may increase the turnover rate. In general, the intra-hospital referral mechanism was found to enhance the overall production utility of the medical alliance compared with individual hospitals operating without the partnership. Patients' average length of stay is also reduced, showcasing the positive impact of the collaborative approach. However, the turnover rate varies with the parameter setup, particularly when patients are redirected within the alliance. In conclusion, the restructuring of diagnostic disease groups within the medical alliance proves instrumental in improving overall healthcare service outcomes, providing a compelling rationale for the government's promotion of patient referrals within collaborative medical alliances.
Keywords: collaborative medical alliance, diagnosis related group, patient referral, simulation
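A stripped-down sketch of the DES idea for the two-member case: Poisson arrivals, exponential lengths of stay, fixed bed capacities, and a 'whether-to-refer' rule triggered by the bed usage rate threshold. All rates, capacities and the threshold are hypothetical, and production utilities are omitted for brevity; the event-loop and referral-trigger structure is the point.

```python
import heapq
import random

random.seed(42)

CAPACITY = {"A": 40, "B": 25}        # beds per hospital (assumed)
ARRIVAL_RATE = {"A": 1.6, "B": 0.9}  # patients per hour (assumed)
MEAN_STAY = 24.0                     # mean length of stay, hours (assumed)
THRESHOLD = 0.85                     # bed usage rate triggering referral
HORIZON = 24.0 * 90                  # simulate 90 days

occupied = {"A": 0, "B": 0}
referred = admitted = turned_away = 0
events = []                          # min-heap of (time, kind, hospital)

for h, lam in ARRIVAL_RATE.items():  # seed the first arrival per hospital
    heapq.heappush(events, (random.expovariate(lam), "arrival", h))

while events:
    t, kind, h = heapq.heappop(events)
    if t > HORIZON:
        break
    if kind == "discharge":
        occupied[h] -= 1
        continue
    # schedule this hospital's next arrival
    heapq.heappush(events, (t + random.expovariate(ARRIVAL_RATE[h]), "arrival", h))
    target = h
    if occupied[h] / CAPACITY[h] >= THRESHOLD:   # whether-to-refer decision
        other = "B" if h == "A" else "A"
        if occupied[other] < CAPACITY[other]:    # refer only if a bed is free
            target, referred = other, referred + 1
    if occupied[target] < CAPACITY[target]:
        occupied[target] += 1
        admitted += 1
        heapq.heappush(events, (t + random.expovariate(1 / MEAN_STAY), "discharge", target))
    else:
        turned_away += 1                          # no suitable bed anywhere

print(f"admitted={admitted} referred={referred} turned_away={turned_away}")
```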
Procedia PDF Downloads 60